Bezos’ Law

Computing Over Time

When we think about advances in computing power, most of us think of Moore's Law, Kryder's Law, or Nielsen's Law, which are all about the pace of technical development. Moore's Law is the best known: computational power doubles roughly every 18-24 months. Kryder's Law is about storage; it doesn't have a fixed doubling period attached, but it observes that storage capacity grows at a much faster pace than Moore's Law would suggest. Finally, we have Nielsen's Law, which is about network throughput. Nielsen theorized that since network performance improves more slowly than computational power, we would always be network-bound (that is, the network would always be the limitation), and for all intents and purposes that has proved true.
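To put rough numbers on that gap (my own arithmetic, using the 18-month doubling figure above and Nielsen's commonly cited figure of roughly 50% annual growth in user bandwidth):

    % Annualized growth implied by a doubling every 18 months,
    % versus Nielsen's ~50% per year figure for user bandwidth.
    2^{12/18} \approx 1.59 \;\Rightarrow\; \text{compute: } \approx 59\%\ \text{per year}
    \qquad \text{vs.} \qquad \text{bandwidth: } \approx 50\%\ \text{per year}

A nine-point gap sounds small, but compounded year after year it is what keeps the network as the bottleneck.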

So we have the three immortal axioms of computing: processors double on schedule, storage grows even faster, and networks lag behind. That's all well and good except for one thing…

Moore’s law is broken

In case you didn't get the memo, Intel hit the power wall in 2005. Up until then, Intel's strategy was to make a single chip as fast as possible. Damn the torpedoes, Intel was going to make the biggest and baddest chips on the planet. Then one day they woke up and companies like Facebook were buying AMD processors because the performance-to-power ratio was so much better. In fact, the cost of operating Intel's beefy single-core chips was such that many companies were evaluating other computing platforms entirely.

How were these companies beating Intel, the juggernaut of CPU systems? More cores. So in 2005, Moore's Law effectively fractured, at least from the user's perspective. Why? Because computer programs were almost universally written for one core, and parallel computing is hard. It's getting easier, but it's still hard.

The companies that will dominate in the future are the ones that can put many cores to work simultaneously, as opposed to the companies that dominated in the past simply by throwing more hardware at the problem. You can still throw money at the problem, but your code has to change or you'll suffer diminishing returns on your investment. That's why Moore's Law ostensibly broke in 2005: code doesn't automatically scale with hardware anymore.

Yes, Moore's Law still holds, with capacity doubling every 18-24 months, but the doubling now comes from adding ever more cores. Right now, the programming languages many people use either aren't built to take advantage of those extra cores, or do so only with a lot of expensive overhead to keep the parallelism transparent to the programmer and the user. So the limitation now is less about infrastructure and more about creativity: programs written for multi-core environments will run dramatically faster than clumsily written ones.
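To make that concrete, here's a minimal sketch in Erlang (the language we lean on later in this post) of the same map written sequentially and in parallel. The module and function names are mine and purely illustrative:

    %% pmap_sketch: illustrative only, not production code.
    -module(pmap_sketch).
    -export([seq_map/2, par_map/2]).

    %% Sequential map: one core grinds through the list an element at a time.
    seq_map(F, List) ->
        [F(X) || X <- List].

    %% Parallel map: spawn a lightweight process per element and collect
    %% the results in order. The Erlang VM spreads the processes across
    %% every core it can see.
    par_map(F, List) ->
        Parent = self(),
        Refs = [begin
                    Ref = make_ref(),
                    spawn(fun() -> Parent ! {Ref, F(X)} end),
                    Ref
                end || X <- List],
        [receive {Ref, Result} -> Result end || Ref <- Refs].

In the shell, pmap_sketch:par_map(fun(X) -> X*X end, lists:seq(1, 1000)). does the same work as the sequential version, but the runtime fans it out across however many cores the machine has.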

Amazon Web Services and Computing Over Time

AWS, coincidentally, launched in 2006 as an experiment by Jeff Bezos in aggregated computing. The thesis was simple: sell computing like a utility. Jeff and company benefited from a lot of trends, but none more important than the growing weight of power-to-performance ratios. Amazon wanted cheap chips that didn't use a lot of power, and they wanted a lot of them. What many of us don't realize is how large a component of the overall computing ecosystem electricity becomes at scale. Power is everything.

AWS, like most Amazon services, is playing the long game. Jeff's plan is pretty simple: if Amazon operates at zero margin for long enough, no one will be able to compete. It's an interesting idea, but one that can only be proven over time. Which brings me to my idea for a new law: Bezos' Law.

Bezos’ Law

The Cost of Cloud Computing will be cut in half every 18 months – Bezos’ Law

Like Moore's Law, Bezos' Law is about exponential improvement over time. If you look at AWS history, they drop prices constantly; in 2013 alone they've already had 9 price drops. The difference, however, between Bezos' Law and Moore's Law is this: Bezos' Law is the first of these laws that isn't anchored in technical innovation. Rather, Bezos' Law is anchored in confidence and market dynamics, and it will only hold true so long as Amazon is not the dominant force in cloud computing (50%+ market share). Monopolies don't cut prices.
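Written out as a formula (my formalization of the one-liner above, not anything Amazon has published), with t in months and C_0 today's price:

    % Bezos' Law as exponential price decay.
    C(t) = C_0 \cdot \left(\tfrac{1}{2}\right)^{t/18}

So at t = 36 months, C(36) = C_0/4: the same unit of compute at a quarter of today's price.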

So where does that leave us? I suppose that depends on how much you love infrastructure. If you love it, this is probably irrelevant to you; if you don't, it means the cost of running your application will be cut in half every year and a half. That's a big deal.

Conclusions and/or Considerations

If you're a programmer, and if you believe in these predictions and the approaches people are taking to improve capacity and performance, your programming style may need to evolve with them. In short, write parallel code because it's better, and realize that the world is migrating toward more cores as quickly as it can because of power consumption patterns and operational considerations.

A Quick Note

At 2600hz, we think a guy named Joe Armstrong nailed it when he created the language Erlang. There's a lot to love, but some of the key highlights for us are trivial parallelization, an actor model with per-actor garbage collection, and massive serialization. We use Erlang as the core of our stack, binding together our border controllers, media servers, databases, and applications. It's not always about the language you use, but in our case we've found Erlang to be scalable, modular, and ultimately fit for the task of managing global communications infrastructure. We feel that this tech, along with the engineers implementing it, gives us a significant edge in getting to market and staying online. It's a rough world out there, and in short order companies that can't parallelize will be left with massive legacy cost structures.
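For readers who haven't seen the actor model in practice, here's a toy Erlang sketch (mine, not 2600hz production code): each actor is a process that owns its own state and is reached only by messages, so thousands of them can run, and be garbage-collected, independently across cores.

    %% counter_sketch: a toy actor. Illustrative only.
    -module(counter_sketch).
    -export([start/0, bump/1, read/1]).

    %% Each counter is its own process with private state (N).
    start() ->
        spawn(fun() -> loop(0) end).

    %% Fire-and-forget message; no locks, no shared memory.
    bump(Counter) ->
        Counter ! bump,
        ok.

    %% Synchronous read: send a request, wait for the reply.
    read(Counter) ->
        Counter ! {read, self()},
        receive
            {count, N} -> N
        end.

    loop(N) ->
        receive
            bump ->
                loop(N + 1);
            {read, From} ->
                From ! {count, N},
                loop(N)
        end.

C = counter_sketch:start(), counter_sketch:bump(C), counter_sketch:read(C) returns 1, and spawning thousands of these counters is routine.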

In a nutshell, parallel code is where it's at, and at 2600hz we leverage Erlang to get us onto many cores. We're well on the way, and we think this is the direction computing is heading. Parallel computing ensures Moore's Law's longevity.

Do you think parallel code is the right answer? Are you worried about your legacy communications system not scaling? Talk to us, we can help. Ring us at sales@2600hz.com to chat today!

Please note, this was originally published on the 2600hz Corporate blog at http://blog.2600hz.com/post/55614383443/bezos-law

How Twilio Might Raise their 50M Series D Round

It’s Clobbering Time!!

Twilio is a marketing juggernaut and probably the best developer evangelism organization in the Valley (maybe the world). They have amazing documentation, a lot of great engineering talent, and a business model that can scale. They have raised $33.5M and are now attempting to raise an additional $50M in a Series D round. Here's a list of some of the concerns and benefits of giving Twilio this money:

Concerns

  • What's going on with Evan Cooke? Is he still the CTO? Evan is smart as a whip, but it has to be scary that he's on sabbatical advising startups all over the world instead of working on Twilio's pre-IPO infrastructure.
  • Selling SaaS applications into the Enterprise and Big Telco is hard. Whatever happened to the AT&T deal? As far as I can tell, it's dead in the water.
  • Competition is fierce from companies like Voxeo. Telecom is a cut-throat market, and there's a question of how long Twilio can maintain their excellent margins.

Benefits

  • Twilio has a TON of developers, and those developers love Twilio. You can't buy that kind of attention. They clearly have a vision for how to work with developers, and they execute it extremely well.
  • Twilio is big. They have a lot of employees and they move a lot of widgets; they are definitely a much more serious company than they were two years ago. They even hired that awesome dude from Salesforce to turn up the heat on revenue.
  • They have proven their thesis: telecom services must be exposed via APIs. This is perhaps the most important point.

So should Fred and Union Square pay Jeff the money?

It's a very tough question. From a financial perspective, Twilio would likely need to exit at between $400M and $600M for this to be worth it. I expect that after four rounds the dilution the founders have experienced is severe, but that's to be expected for a late-stage company. From a scaling perspective, they need to identify new markets and attack them aggressively. If I were in Twilio's shoes, here's what I would do:

Step 1) Rock the Enterprise. Twilio has to find a way to own the enterprise. The SIP-out functionality is definitely a step in the right direction, but frankly Tropo from Voxeo still has them beat in terms of feature depth. That said, Twilio is a lot easier to work with than Voxeo's offerings; the APIs are easier to consume, and while that used to be irrelevant to the enterprise, the trend of enterprise consumerization really favors Twilio's simplicity. I'd hammer this. Voxeo is tough to compete with because they have a great suite of products, but Twilio has the edge in ease of integration. What Twilio really needs to do here is find a way to get inside the corporate firewall. It's a hard problem, but that's how they can meet the kinds of compliance requirements enterprises have.

Step 2) Crush International. Look, I work in telecom; I know international is really hard, but Twilio actually has a very robust offering here. To IPO, growth opportunities need to be rampant, and some partnerships to extend functionality in Asia would be incredibly valuable. Leading the conversation in this direction would be advisable.

Step 3) Take money from a Telecom VC. Twilio needs a strong carrier partner. Taking money from a telco VC would signal that the Twilio team is serious about going deeper into telecom and working with the big carriers (as opposed to trying to strangle them in their sleep). There's definitely a lot of confusion within the carrier ranks about how Twilio plans to partner with them; signaling a willingness to work with carriers by taking money from a telecom VC points to another robust growth market.

So should Fred and company pay Jeff? If Twilio can be sold for $500M+ or IPO with revenues in excess of $40M per year, I would think paying them would be a no-brainer. If Twilio is not going to hit $40M in revenue next year, the question becomes murkier. I tend to think a private sale for a SaaS company would price at 8-13x revenue (much lower multiples for consulting companies, higher multiples for advertising companies; it's basically a question of margin), and an IPO could really be any multiple (IPO pricing varies with the tea leaves).
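Putting rough numbers on those multiples (my own arithmetic, using the $40M revenue figure above):

    % The 8-13x revenue range applied to $40M/year.
    8 \times \$40\text{M} = \$320\text{M}
    \qquad
    13 \times \$40\text{M} = \$520\text{M}

That lands in the same neighborhood as the $400-600M exit I mentioned earlier; miss the $40M and the math gets uncomfortable fast.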

Honestly, with the Tumblr acquisition, I’m not sure how anything works in the valley any more.

What do?

I think very highly of Jeff and the Twilio team. I tend to think they'll be successful in raising this money, not only because they've got a pattern of success but because they have to do it. There are obviously some hurdles, but no good story is without its difficulties.


Carrier Subsidies and Fuzzy Math

Fuzzy Math

Ol' G Dubya is out of the White House, but we've still got a lot of fuzzy arithmetic to sort through. T-Mobile is the latest guilty party, what with their advertising reaching the level of "deception". I've got nothing against profits, but I'd like to do a little math for the consumer. Read more »


Finding the Prettiest Girl at the Dance

The state of the art in Unified Communications is becoming boring. Telling vendors apart has become an exercise in the minute details of RFP processes. It wasn't always this way, and my friend Dave Michels of TalkingPointz has some comments from the recent UCSummit: Read more »


How Wiretapping in the US helps Oppressive Regimes Monitor Web Traffic


This is a technical discussion of how US policies enable oppressive regimes to monitor their citizens' web traffic. I'm not going to discuss the legality of these methods or their ethical place in society. I'm only talking about the technical reasons this technology has spread; I am not advocating or deploring its use. Read more »


What is Proximal Networking and why should I care?

Proximal Networking

The Problem

Seems like there's a new networking idea in town, and it has some very interesting properties. In wireless networking today, we all communicate through access points, either Wi-Fi access points or cell towers. Our cell phones connect to a centralized network of towers, and our computing devices at home connect to our Wi-Fi-enabled routers. This is fine for residential or closed commercial networks, but the centralized model suffers under the increased load of big events and other large public gatherings. It follows that the public wireless network can, and likely will, be inoperable in places with too many people (this is a consequence of Shannon's Law from information theory, which ties channel capacity to bandwidth and signal-to-noise ratio). Read more »
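For reference, the limit behind that claim is the Shannon–Hartley theorem, where C is channel capacity in bits per second, B the bandwidth in hertz, and S/N the signal-to-noise ratio:

    % Shannon-Hartley channel capacity.
    C = B \log_2\!\left(1 + \frac{S}{N}\right)

Pack more devices into the same spectrum and each one gets a thinner slice of B while raising everyone else's noise floor, so per-user capacity collapses.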


Sprint, Softbank and Dish: The Love Triangle from Hell

The Love Triangle from Hell

It was bound to happen. Sprint, the third-place American network operator, has been looking for suitors for some time, and with a price tag north of $20 billion, it's a big acquisition target. Softbank, the Japanese conglomerate, tried to purchase a controlling interest in Sprint for roughly $20 billion, but today news comes out that Dish Network, the US satellite operator, has bid $25.5 billion. What a crazy set of circumstances; let's break it down! Read more »


How Swatting Works

How Swatting Works

Brian Krebs, an investigative journalist formerly with The Washington Post, has the dubious distinction of being one of the first journalists to be "swatted". For those of you not up on hacker nomenclature, "swatting" is the practice of using forged information to send a heavily armed police team to an unsuspecting victim's house. It's a deplorable practice, but, as with all threats, the only way to defend yourself is to understand the attack vector. Read more »


A Tale of Two Startups

A Tale of Two Startups

We are citizens of a global village. Some of us are more tied to one particular area, but we are all members of a growing global society. The connections that bind us seem to grow stronger each day, and we may have a number of great social tools at our disposal to thank for this. My question today is simple: are the social systems we use each day capable of supporting more than just random interactions? Can you build a business with someone else's data? Read more »


On Delphi and Looney Toons: Oracle acquires Acme Packet

Disclaimer: At the time of this article's publication, Joshua works at 2600hz in a marketing capacity. The thoughts and opinions expressed in this piece are his own and do not reflect the opinions of 2600hz.
 

Bugs Bunny and the Oracle of Delphi walk into a bar…

Oracle acquired Acme Packet a couple of days ago for somewhere between $1.7B and $2.1B. That's a lot of money for something most folks don't understand (Session Border Controllers) and something that's well outside of Oracle's core business. Let's dive into the reasons Oracle might have made this purchase:

Oracle wants to play with the Telcos

Historically, Oracle has had some limited penetration into the very large telcos, but they've had trouble breaking into the long tail of the telecom world. Rural CLECs and smaller international operators in particular represent a huge chunk of untapped potential revenue, and purchasing Acme Packet gives Oracle an "in" with all of these organizations. Acme Packet is the SBC provider of choice, and since it's basically impossible to run a modern telco without this technology, it makes sense that Oracle would want their tech. Of course, that's also a good reason for Acme to want to stay independent; ultimately they chose not to.

Let’s dive a bit deeper into what I perceive as the 3 biggest reasons Oracle and Acme agreed to this marriage.

Acme Packet had a lot of liabilities and was being actively threatened by open-source tech

Acme Packet is a large company with hundreds of millions of dollars in revenue, but they also have very skinny margins due to their rapid growth. It seems as though Acme would continue to grow, and while that might be true in the short term, there's a very real existential threat to their entire way of doing business. The so-called new carriers like Twilio and 2600hz don't run border controllers from Acme; they run open-source stacks like OpenSER and Kamailio. The difference in cost structure between running an open-source SBC and a closed-source stack is greater than the difference between a bicycle and a Ferrari.

Open-source SBCs are cheap and powerful. 2600hz runs Kamailio, and the hardware we deploy on costs $500-1,500 per server (less if virtualized). To support an infrastructure handling a few thousand calls with Acme, you're looking at spending well over $50k, which is unthinkable for a startup. I am not the target market for Acme, and yet I find myself partnering with them on a number of deals simply because of their proliferation in the marketplace.

Acme’s approach to Session Border Controllers plays right into the Oracle portfolio

This open-source pressure is irrelevant to Oracle. The Acme way of selling things is eerily similar to Oracle's. Acme positions its SBCs as best-of-breed, and as far as appliances go, they are. Oracle operates much the same way, and both already employ the complex steak-dinner/love-affair enterprise sales process. They both sell big-ticket hardware with perhaps even bigger licensing attached. The hard-driving sales departments at both companies will work well together regardless of how long the integration takes. Hard-nosed sales folk hit their numbers no matter what obstacles sit in their way.

It seems like a good cultural fit to say the least.

What happens now?

Folks that currently enjoy Acme Packet can look forward to awesome Oracle sales pitches. Along with their SBCs, carriers can expect to learn about Oracle DB products and other accouterments. The real question for me is: is Oracle trying to play the telco game?

The case for Oracle becoming a Telco Provider

Well, they already bought Acme, right? Now all they really need is a switch and they'd have a full-blown competitor to the Broadsofts, Metaswitches, and Ciscos of the world. Oracle is one step away from competing blow for blow with the big boys, but jumping into telecom land is risky. The reward is large, but the road to hell is paved with good intentions, and countless organizations have been torn asunder by attempting to jump into telecom.

The case against Oracle jumping into Telecom Madness

Telecom is a mad, mad world. It's hard to even begin to explain how ridiculous telecom is compared to an industry like databases. Whereas the database world has seen dramatic changes on an almost yearly basis, telecom has been virtually stagnant for 100 years. Oracle would love to play in the telco world, but they don't have a lot of experience with routing technology. Real-time media management is not something Oracle specializes in, so to pursue this market they'd have to make a considerable investment in personnel and outside expertise. It's also unlikely that Oracle will pick the right approach the first time, because no one in telecom ever does, so the costs will be considerably larger than a one-time expense.

There’s gold in them there hills, but is Oracle the miner or the Pickaxe salesman?

Acme Packet is a pickaxe, but Oracle could become a miner (or at least a better salesman) with a switch. It’s a question of ROI and I would hate to be the MBA who has to make those calculations.

In summary, Oracle bought Acme Packet while Acme was looking to exit. Their cultures line up nicely, but I don't know whether Oracle actually wants to get into telecom or whether they're using Acme to sell more database boxes. If it's the latter, this investment is essentially buying an email list. If it's the former, things might get very interesting around Q2 2014.

 
