Friday, December 18, 2020

Sub-Optimal A/B Testing - Why?

By: Alon Cohen Jan 14, 2018, updated: Dec 18, 2020


A/B testing is the primary tool marketing people use to optimize conversions in the digital marketing world. It’s a method to find the better converting version of a webpage or an ad.


The way A/B testing works is that you randomly present page A or page B to your website visitors and check which version of the page converts more visitors. Some tools (like https://www.optimizely.com) make that process relatively straightforward; however, it takes time to collect sufficient evidence to make a clear decision about which page performed better.
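To make the mechanics concrete, here is a minimal sketch (in Python) of the random assignment and conversion tally an A/B test relies on; the variant names and tracking calls are illustrative, not any particular tool's API. Real tools also pin each visitor to one variant, typically with a cookie.

```python
import random
from collections import Counter

impressions = Counter()
conversions = Counter()

def assign_variant():
    """Serve page A or page B with equal probability (a real tool pins this per visitor)."""
    variant = random.choice(["A", "B"])
    impressions[variant] += 1
    return variant

def record_conversion(variant):
    """Call this when a visitor who saw `variant` completes the goal action."""
    conversions[variant] += 1

def conversion_rates():
    return {v: conversions[v] / impressions[v] for v in impressions}
```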


So why even bother?


The problem with webpage design is that it is hard to get it right the first time. A designer might decide that the call-to-action button is in the way and move it to a spot where it becomes ineffective. Color schemes and market trends affect how people perceive, understand, and operate a page.


Statistically significant results from an A/B test can help validate a webpage's design assumptions and improve on them.


What can go wrong?


If, for instance, you did not split impressions 50/50 between the A and B versions of the page, you might wrongly conclude that one page performs better.


In many cases, unless one page is terrible, the two will perform very close to each other, and the statistics can swing week after week. You must wait a sufficiently long time, sometimes a few weeks, to get a decisive answer.
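To illustrate why the wait is unavoidable, here is a standard two-proportion z-test sketch (textbook statistics, not any specific tool's method); with conversion rates this close, the difference is not significant at a thousand visitors per variant, but it usually is at tens of thousands.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

print(two_proportion_z_test(20, 1_000, 23, 1_000))            # 2.0% vs 2.3%: p ~ 0.6, no decision yet
print(two_proportion_z_test(2_000, 100_000, 2_300, 100_000))  # same rates, more data: p << 0.05
```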


Picking the wrong page will reduce your conversions.


Since you usually use multiple channels to target customers, a change in one channel might skew the A/B test results. Even when the two pages are close in performance, A/B testing can still be helpful, but it usually takes a lot of work to get conclusive results, and ongoing testing is required.


So what is going on here?


Say you have used the best tools for A/B testing. You waited and got only a slight statistical confirmation that one page is better. Why just a slight edge? Because one page suited some people, and the other suited others.


The audience is not homogeneous. When the results are close, half of the audience liked version A, and the other half liked version B. The sad result is that your bottom line stayed the same despite all your optimization efforts and patience.


To explain it better, let’s say 50% of the people like red and 50% like blue. If you make the page red, more red-liking people will convert. If you make the page blue, more blue-liking people will convert. So it does not matter whether the page is red or blue; total conversions will not improve.
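A tiny simulation makes the point; the 50/50 preference split and the per-group conversion rates below are made-up numbers, chosen only to illustrate the argument.

```python
import random

def simulate(page_color, visitors=100_000, match_rate=0.06, mismatch_rate=0.02):
    """Overall conversion rate when everyone sees the same page but tastes are split 50/50."""
    converted = 0
    for _ in range(visitors):
        preference = random.choice(["red", "blue"])            # half like red, half like blue
        rate = match_rate if preference == page_color else mismatch_rate
        converted += random.random() < rate
    return converted / visitors

print(simulate("red"), simulate("blue"))   # both land near the same blended rate (~4%)
```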


Is there a solution?


Ideally, you need to have a different version of the webpage for each visitor or at least a separate page for various market segments or buyer personas. Only then could you see improvement in your total performance.


Unfortunately, I have yet to see helpful marketing tools that can tell you (the site) in real time which version of the page to render to which user. The hope is that such a tool or an API would enable proper personalization of webpages and drastically improve conversions.


It is not about knowing the visitor’s name. It is about understanding the visitor’s social or behavioral profile and displaying the correct page for each visitor’s characteristics.


In simple words, you need a way to show red-loving people the red page and blue-loving people the blue page. This way, you can improve total conversions and move from a "local" maximum on the optimization graph to a more "global" optimization of the sales process.
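In code, the missing piece is just a per-visitor routing decision; the profile field and page names below are hypothetical, since no real personalization API is being named here. Reusing the simulation above, serving each visitor the matching page pushes conversions toward the 6% "match" rate instead of the blended 4%.

```python
def pick_page(visitor_profile):
    """Serve the variant matching the visitor's inferred preference (hypothetical fields)."""
    preference = visitor_profile.get("color_preference")
    if preference == "red":
        return "index_red.html"
    if preference == "blue":
        return "index_blue.html"
    return "index_default.html"   # unknown segment: fall back to the A/B winner
```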

 




Thoughts?


Have you heard of such a tool or an API? Let me know.



Wednesday, July 29, 2020

Corona - Silver Lining?


By: Alon Cohen

We are social distancing and working from home. We learned that it is possible to live and work that way, and maybe even work more efficiently, as we spend less time on social gatherings and skip the daily commute. As one who has worked from home for many years, I can say that it works even better when everyone is working from home.


From the technology perspective, people finally discovered video conferencing and learned how to use it, with the benefits and the few problems it presents. Many even learned the importance of good lighting and an impressive backdrop.

Engineered or not, the virus highlighted the upcoming international battlefield. I wish such defenses were not needed, but I hope we will see an increase in the research budget for developing protections against biological weapons. As a by-product, I hope it will help find cures for health problems that plague our society.

We suddenly discovered that developing vaccines does not have to take ages. Hence, the usual excuse of the pharmaceutical companies, that development is so costly because “it usually takes ten years,” will no longer be valid.

Pharmaceutical companies need to stop spending most of their time searching for medications that merely sustain life. They should instead find ways to actually cure or prevent diseases using vaccines, regenerative medicine, and genetic medicine.

Given the ability of those genetic technologies to cause disasters on a global scale, it might be necessary for nations to own the IP (Intellectual Property) for drugs and vaccines by financing and directing the research. 

The emphasis should be on finding cures for diseases. A cure, by its nature, is a less lucrative outcome for the pharmaceutical companies, because once they find a cure, the disease can be largely eradicated.

By directing and financing biological research, governments can legally leverage that new IP in the upcoming biological warfare.

Interestingly, we may have created and proven a new business model in which pharmaceutical companies get paid a lump sum for a product that prevents disease, a vaccine (Operation Warp Speed).

Maybe we can extend this model and regulate pharma so that if they want to sell us non-generic drugs, they must introduce at least one preventative or curative medication per year (as opposed to life-sustaining medicines). Once they present such a cure, the pharmaceutical companies will get paid handsomely, upfront, because it will be worthwhile for the economy to get those newly cured, healthy people back into the workforce.

We also learned that volunteers are willing to test new vaccines on themselves to help bring them out faster, or, as they call it, “one day sooner,” and save lives. Those people are heroes in my book.

I think we learned that, as a nation, we must invest more in quantum computing, as it holds the key to faster materials and medicine research that would otherwise take years with conventional computers.

I think we realize that when the doctor says, “you contracted a virus, there is nothing we can do...”, he is probably wrong. 

With the correct focus, we can defeat viruses from herpes to the common cold, maybe even influenza. By doing so, we return to the economy vast amounts of money that today are wasted on hospitalizations and lost workdays.

We used to say that trains, then aviation, then the Internet made the world smaller; we now realize that infectious viruses make it even smaller. A virus does not require infrastructure or energy to travel - just an abundance of people.

Internet communication can be blocked, intercepted, manipulated, and firewalled. The coronavirus showed us that viruses can jump any border, and if we embed data in a virus's DNA, we can pass information directly from person to person in a way that might be intercepted but not blocked or censored.

It means that a virus can be useful as a communication channel. A virus can be “developed” as a data carrier. We could use viruses to broadcast information with, hopefully, positive ideas, maybe even a complete version of uncensored Wikipedia, and use them to circumvent even the “Great Wall of China,” or their firewall, in today's terms.
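Setting the biology aside (which is far beyond this post), the information-theoretic part of the idea is simple: DNA has four bases, so each base can carry two bits. A purely illustrative toy sketch of encoding arbitrary bytes as a base sequence and back:

```python
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Map every two bits of the payload to one nucleotide letter."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(sequence: str) -> bytes:
    bits = "".join(BITS_FOR_BASE[base] for base in sequence)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

print(encode(b"Hi"))          # "CAGACGGC"
print(decode(encode(b"Hi")))  # b"Hi"
```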

###

Monday, September 3, 2018

From Hardware to Serverless

By: Alon Cohen, Phone.com EVP/CTO, Sep 3, 2018

Amazon AWS, Google Cloud, and others are now chasing each other to see who will win the Serverless revolution. Amazon calls it Lambda Functions, Google named theirs Google Cloud Functions, and Microsoft calls it simply Functions. The generic name is Serverless Architecture.

The idea is simple to understand: instead of buying a physical server and placing it in the data center, or buying a virtual server instance, configuring the server, installing the needed software, and hoping that it will hold the expected load, a developer can now split the traditional monolithic code into a defined set of business-logic tasks that can each be invoked by a URL, and invoke those tasks as many times as the application requires without thinking about scale, load balancing, networking aspects, and more.
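As a rough sketch of what one such task looks like, here is a handler in the shape AWS Lambda uses behind an API gateway (Python); the event fields and the order-total logic are made-up examples, not Phone.com's code.

```python
import json

def handler(event, context):
    """One small business-logic task, invoked over HTTPS through an API gateway."""
    body = json.loads(event.get("body") or "{}")
    items = body.get("items", [])
    total = sum(item.get("price", 0) for item in items)
    return {"statusCode": 200, "body": json.dumps({"order_total": total})}
```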


It sounds simple, and it is, once you get the hang of the dashboard provided by Google or Amazon to manage, monitor, and define the components you need in your solution and their access rights. When the configuration is done, all you need to think about is how to break the monolithic process you had in the past into a nimble set of small tasks.

In fact, there are a few programming paradigms that you need to leave behind in order to take full advantage of this new architecture.



In the past, developers used to build large data objects to hide information from other tasks while still providing unified access to the object's state and properties. In most cases, this approach dictates a large monolithic application with nested data structures that are hard to maintain and that require long release cycles and long regression testing. Every change you make affects large parts of the program and forces you through rigorous version management and re-testing.

To mitigate some of the above issues, people built service-oriented architectures in which different layers abstract different functions and allow developers to make changes in one layer without affecting the others. However, this approach, as clean as it might be, still keeps each layer dependent on the other layers, so a change in one layer requires not just a unit test but, yet again, a full QA regression test of the whole system before every release. All of that translates to long release cycles.

Serverless, for the open-minded, opens up new opportunities. You still want to make sure you write a service once, but if you work correctly, you can remove the code's global dependency on any specific version of a function. In other words, in the monolithic architecture, when you change a function, that change also affects every part of the code that calls that function, object, or service. That dependency is the key problem. So how can we change that?

Instead of creating abstraction layers or services that depend on other services, you create a shared library of global functions. And here’s the key: when you package a serverless Task, you include the latest version of the shared library code, test that small Task, and deploy. None of the other serverless Tasks that are running and using that shared code are affected. The other Tasks keep using the version of the function code they were tested with. In fact, they do not even have to be taken down as you update other Tasks. Most likely, if you built it correctly, a Task will be completely independent of any other Task in the system.
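A sketch of what that packaging can look like in practice; the directory names and the calculate_tax helper are hypothetical.

```python
# Deployment package for a single Task. The shared library is copied in at
# build time, so Tasks that are already deployed keep the exact version of
# the shared code they were tested with, even after the library changes:
#
#   create_invoice/
#       handler.py
#       sharedlib/          # snapshot of the shared code at deploy time
#           billing.py
#
from sharedlib.billing import calculate_tax   # hypothetical shared helper

def handler(event, context):
    amount = event["amount"]
    return {"amount": amount, "tax": calculate_tax(amount)}
```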

If you adhere to that flat architecture, you will now enjoy rapid bug fixing. You will have a very small blast radius, meaning that a bug in a newly updated Task affects only the availability of that Task and may not disrupt any other Tasks in the system. You also do not need to maintain a Master Branch for the whole app; instead, you manage each Task version independently. Since the program is split into small Tasks, there is very little need for merging code managed by a few people; most Tasks are written by a single person. And, by maintaining a good set of Code Style conventions in the organization, all developers become fungible, meaning any developer can understand and fix other developers’ code.

How much does it cost?

At Phone.com, we used to handle call events in our Core API in a traditional data center. We needed about four instances and two load balancers between the API service layers to support a barrage of call-state events coming from our telephony servers, millions per day. Once we moved the event handling to our serverless environment, where you pay per Task invocation, we reduced the cost by a factor of 10.

Amazon AWS also offers a serverless database called Aurora, which is still in beta. I cannot wait to see how they price it in comparison to other database options. In the instance-based architecture, one pays about $70 per month for the smallest database instance before any data has even started to accumulate.

There are more interesting aspects that contribute to system stability in case of a crash. If a Task crashes, it affects only that Task; the next request will launch a new instance of the Task and run it again.

Is it Secure?

Security aspects are handled by creating a Virtual Private Cloud isolated from the outside. By using an API Gateway module, you allow public access to specific functions. The API gateway lets you define your own authentication and authorization mechanism.
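For example, the authorizer itself can be just another small function that inspects a token and returns an allow/deny policy. The sketch below follows the general shape of an AWS Lambda token authorizer; the token check is a stand-in for whatever validation you actually use.

```python
def is_valid_token(token):
    """Stand-in for a real check (JWT signature, expiry, database lookup, etc.)."""
    return token.startswith("Bearer ") and len(token) > 20

def authorizer(event, context):
    """Allow the API call only if the caller's token checks out."""
    effect = "Allow" if is_valid_token(event.get("authorizationToken", "")) else "Deny"
    return {
        "principalId": "caller",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event["methodArn"],
            }],
        },
    }
```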

Additional Gains

Instead of cron jobs, you can now tell a function to start at given intervals. If you need to run a large report that normally takes a long time to generate, you can create a recursive structure where you split the report into hundreds of small segments, each one spawning the same Task, all running in parallel. As the Tasks finish, they aggregate the report segments back into one large report. All of that happens in seconds, without the need to consider scalability aspects. An overnight-type report is now produced in seconds.
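A sketch of the fan-out step using the AWS SDK's asynchronous invoke; the worker Task name and the segment count are placeholders, not our actual report pipeline.

```python
import json
import boto3  # AWS SDK for Python

lambda_client = boto3.client("lambda")

def handler(event, context):
    """Split a large report into segments and spawn one worker Task per segment."""
    report_id = event["report_id"]
    segment_count = event.get("segment_count", 100)
    for segment in range(segment_count):
        lambda_client.invoke(
            FunctionName="report-segment-worker",   # hypothetical worker Task
            InvocationType="Event",                 # asynchronous: do not wait for the result
            Payload=json.dumps({"report_id": report_id, "segment": segment}),
        )
    return {"spawned": segment_count}
```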

Once a function finishes, it lingers a bit, for free, to allow for a faster start time; if it is not invoked for a while, it simply dies and releases its AWS resources.

Reduced OPS overhead.

Source: https://specify.io/concepts/serverless-baas-faas
Phone.com is still in the midst of that transformation; however, we have already harvested the benefits in the form of happy customers sending our employees huge edible arrangements of fruit and thank-you letters of appreciation for the quick turnaround of almost any request they come up with.

###

Sunday, January 7, 2018

Reality & Fake News

By Alon Cohen: Jan 7, 2018

I have not written any blog posts for a while. Part of it is probably laziness, or the fact that if I have nothing to say, I just don't say anything. This time is different: Google forced me to write something, or they would delete my blog.

Last night I was given a warning shot when they blocked my access to a Google Form that I created, without any explanation. For those who know me: in 2015 they completely shut down my Google account for three days, which almost erased my existence from the earth, so I am taking this warning shot seriously. The shot was probably also related to a warning about deleting this blog if I don't post a new one.

So here I am.

So what is the reality? In the old days, it was history books, which told us what transpired in the past, and we thought that was the reality back then. History books were written based on printed documents and stories passed down. Now we have the Internet and TV, which are so up-to-date that we tend to believe they are the reality.

It is clear to everyone who creates content that whatever is recorded on the Internet will become the historical record that tells the story for years to come.

Yet how real that story will be is debatable.

There are researchers who claim that the Bible is fake news. They ask why there are no Jewish remnants in Egypt. They claim the Bible is just a collection of stories.

We know from our own life experience that reality, whatever it is, is different for every observer. One can only assume that back then, without the Internet or any fast way to propagate and record information, the core of the stories may have happened, but the rest was just the storyteller's embellishments and gap-filling.

Well, this is the best they could do back then.

We don't have to go that far. Forty years ago or so, there was an article about my dad in the newspaper. Apparently, someone thought he was getting paid too much for keeping a few hundred passengers safe on a 12-hour cross-Atlantic flight (before the days of computers). Oh well. However, the inaccuracies in the article about him, our family, and his work were so profound that I asked my dad how real everything else you read in the newspaper was. His answer, 40 years ago, was, “Don't believe anything you read; the only true thing is the date.” We checked, and on that day even the date on the newspaper was incorrect.

Maybe a thousand years from now, people will debate whether that day even existed, since there are no Israeli newspapers bearing that date.

Over the years, I also had the opportunity to see articles written about me, and again and again the embellishments were so profound that you could almost miss the reality. You know what is true and what is not when it is about you. Everyone else who kind of knows you, however, thinks it is all true; after all, it is in black and white. Apparently, and I have tried it a few times, reporters do not like you to see the article before they publish it, even if you just promise to do some good fact-checking.

Well, I thought they just don't “like” it, but hey, when you invent everything you write, there is really nothing to check; it is all fiction.

Moving to the present: it is clear that objectivity does not exist. Even in court, they will change your offense from speeding to an improper lane change if you get a nice prosecutor, or change your speed from 20 over to 15 over in order to "help" you keep your insurance cost at bay. In New York City, they don't even care whether you committed any offense; you are always guilty. That modified reality is what eventually makes it into the official record.

So fake reality and real reality are simply not related, and the fake is the only thing being recorded or saved. Even if the real reality is recorded, it is mostly edited into its fake state, and that is what gets stored.

For that reason, it is so important that we always question what we read or see on TV and the Internet. I especially like it when one TV station exposes a fake video segment by playing back the real one (but who knows, maybe that is just another fake segment that seems real).

As technical people, it is important that we start thinking about how to record reality in a way that is immutable, signed, and accessible to anyone (not just to Google and other media outlets, who can alter, delete, and hide what they don't like): technology that provides access to raw footage that cannot be doctored, so that innocent people can prove their innocence, and people who prefer the real over the fake can go back and see what actually transpired.
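One minimal direction, sketched here only as an illustration: hash each piece of raw footage as it is captured and chain the hashes, so any later edit breaks the chain. A real system would also need trusted timestamps, signatures, and a public place to publish the records.

```python
import hashlib
import json
import time

def chain_record(footage_path, previous_digest=""):
    """Fingerprint one clip and tie it to the previous record, so tampering is detectable."""
    with open(footage_path, "rb") as clip:
        digest = hashlib.sha256(previous_digest.encode() + clip.read()).hexdigest()
    record = {"file": footage_path, "sha256": digest, "recorded_at": time.time()}
    return digest, json.dumps(record)   # the record would then be signed and published
```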

(Liquid metal or Halloween? - Gal & Jen)

Most importantly, since AI will be very instrumental in defining our future, we need that AI to be built (and trained) on real historical data, and not to base its future decisions and predictions on the fake news being created every minute in this day and age.

What are your thoughts about this? Let me know.


Monday, November 14, 2016

From VoIP via UC to WebRTC

By: Alon Cohen EVP/CTO Phone.com
I recently wrote an article that was published in CIO Review magazine under the title "From VoIP via UC to WebRTC". The article is about how the telephone, which dominated business communications for almost a century, has changed.

As the technology was advanced, it seems that each generation of new communication products and services was more and more determined to block any chance for real-time synchronous voice communications.

Starting with the answering machine in the late 20th century, then email, and finally, enterprise voicemail and messaging.

The phone call went from universally synchronous to universally asynchronous communication.