.EXE Interview with Anders Hejlsberg on Delphi (1995)

To commemorate the 25th anniversary of Delphi on Feb 14, 2020, here is a transcript of an interview with Anders Hejlsberg, Chief Architect of Delphi, conducted in 1995 by .EXE Magazine editor Will Watts. Anders discusses the design and development of Delphi and the then-forthcoming 32-bit version for Windows 95. This was the most detailed technical interview published about Delphi at the time.

Q. How did the idea for Delphi evolve from Turbo/Borland Pascal? At what stage did you decide to add the environment, database support etc?

A. The key idea was to design a tool that combines visual development environment, Client/Server database support, and a native code compiler. Before Delphi, you always had to make a choice. Do I go for the performance of a native code compiler, or the ease of use of a visual development environment? Do I go for a powerful object-oriented language, or a proprietary 4GL Client/Server tool? What programmers really want is all of the above, in one package. That's what we set out to do with Delphi.

What it really boils down to is productivity--we wanted to design a tool that would make developers more productive, all the way from prototype to production code. Other products lure you with visual tools, but once you get halfway through your project, they let you down because of sluggish performance, lack of extensibility, or general stability problems. The competition talks about adding extensibility and improving performance. That's a fundamental difference between their products and ours. Extensibility and performance was on the white-board the first day we started designing Delphi, and it permeates the entire product. For example, if you want to design a new component in Visual Basic, you have to write it in another language, such as C or C++ (or Delphi, for that matter). None of your VB skills can be reused, you have to learn a different language, and you can't easily inherit from any of the built-in components. Delphi, on the other hand, allows you to write new components in Delphi, and you can inherit from any of the built-in ones. That's true extensibility, and it translates into a substantial productivity boost.
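To make the extensibility point concrete, here is a minimal sketch of what a new component written in Delphi itself might look like; the unit name, class name and palette page are invented for illustration:

unit CountBtn;

interface

uses Classes, StdCtrls;

type
 { a hypothetical button that counts its own clicks, inheriting from the built-in TButton }
 TCountButton = class(TButton)
 private
  FClicks: Integer;
 protected
  procedure Click; override;   { extend built-in behaviour by overriding a method }
 public
  property Clicks: Integer read FClicks;
 end;

procedure Register;

implementation

procedure TCountButton.Click;
begin
 Inc(FClicks);
 inherited Click;
end;

procedure Register;
begin
 RegisterComponents('Samples', [TCountButton]);  { adds the component to the palette }
end;

end.

Once registered, a component like this can be dropped onto a form alongside the built-in ones, which is the kind of reuse the answer above is describing.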

Another key aspect of Delphi is its versatility. Other tools tend to focus either on Windows application development or on Client/Server development, and one always trades off the other. Delphi is equally adept at both, as is evident from the kinds of applications our customers are building. They range from shrink-wrap Windows utilities and multi-media games, through desktop database applications, and all the way up to multi-user enterprise-wide Client/Server solutions. The point is that almost any Windows application needs some form of database access, and any database application needs some form of Windows specific programming--to be productive, you need a tool that does both.

Delphi really leverages a lot of very mature database technology from Borland including ReportSmith, the Borland Database Engine, SQL Link native drivers for remote servers, and the Local InterBase Server. Just the InterBase server alone is a tremendous technology that gives developers the ability to use full ANSI-92 SQL in their applications, so they can begin exploring SQL and client/server development all on their local PC.

Q. You emphasise Delphi's versatility as an advantage, but surely it is also a drawback? If one needs to build a client/server application, PowerBuilder offers better CASE/database management facilities than Delphi.

A. There's an inherent advantage to being versatile. Look at the computer on my desktop. Do I need a dedicated word processor, a PC for my spreadsheets and a terminal with access to my customer records? No, I've got one PC that's versatile enough to do all these things.

A very large American retail chain--one of the largest--just standardized on Delphi over PowerBuilder precisely because their engineers can do 85% of all their work using Delphi versus 60% of their work using PowerBuilder. That saves them enormous amounts of money and complexity, including in ways you may not have considered. As an example, skills and techniques learned writing a small utility are directly applicable to client/server projects. A lot of today's programmers started out by writing those little command-line utilities in the good old days. It's a great way to experiment with and master the use of data structures, object-oriented techniques or learning about the Windows API. Consider, also, how using the same tool for a broad range of applications provides a company with a neat training path: Someone can start writing non-database programs and then gradually move onto projects dealing with valuable corporate data.

There's no end to the components and views you can add to Delphi. The population of programmers who can build components in Delphi is much larger than with any other tool on the market. We're back to the days when one programmer in one room can build and test something that can be used by tens of thousands of other people. Can you imagine what the availability of specialized component sets will be like in six months? In a year?

I think the entire point of combining a component-based visual development environment with an object-oriented compiler and database technology is to make sure you never run out of gas. That's not a bug--it's a feature.

Q. If you want a quick and dirty hack, surely it makes sense to use Visual Basic, because everybody can use it without having to master a scary, complex language like Pascal. If you are doing multi-media or real time work, why mess around with a system which delivers slightly slower performance, and requires you to hand-translate all the header files for any DLLs you may need, when you could just use C++?

A. As we like to say, "It's not your father's Turbo Pascal any more". We made sure that the Object Pascal code you have to write is as easy as BASIC, but without the limitations.

We've taken great pains to make sure that when you're interacting with components, the code you write is as simple as possible--but no simpler. Many reviewers have remarked that they thought they were coding in Basic when they first started using Delphi. It's that easy. When they want to do something more interesting and start using the richness of the language, they usually start remembering how much they like Pascal.

In fact, I think you miss an essential advantage of Delphi. Anybody who has used a compiler--especially one that supports good type-checking--knows that a compiler is really a programmer's best friend. When it tells you it's probably not a good idea to take the square root of your Window caption, it's showing you a logic error in your code and saving you time. Is it an advantage that BASIC will perform automatic type conversions in that circumstance instead of giving you an error? I wish my spell checker program could complain about the logic of a paragraph I've written in the same way as our compiler warns you about illogical programming statements. Our 32-bit compiler goes even further and offers you all sorts of hints about problems it detects in your program. This kind of help is invaluable and one of the things that makes programming in Delphi very productive.

Q. What is the secret of Delphi's fast compile/link cycle?

A. Borland has over ten years of experience in building the world's fastest compilers, and we've put that knowledge to good use in Delphi--it compiles at about 350,000 lines per minute on a 90 MHz Pentium. A number of factors contribute to this throughput. Delphi units (code modules) compile to .DCU files, which you can think of as a combination of a C++ precompiled header file and an .OBJ file. (It's funny how the hot topic in the C++ community is pre-compiled header files and incremental linking--Borland's Object Pascal technology has had these features for more than eight years.) Delphi units specify what other units they depend on through USES clauses--sort of like C++ #include's of header files. By analyzing the USES clauses of each unit in a project, the compiler can automatically perform minimal builds with no need for a make file. The net result is that the compiler never compiles more than it has to, and it never compiles the same thing more than once. Finally, the clean syntax of Object Pascal allows for very fast parsing.
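As a rough illustration (the unit and routine are made up), a unit's USES clause states its dependencies up front, which is what lets the compiler work out minimal builds without a make file:

unit MathStuff;

interface

uses SysUtils;  { dependencies are declared here, so the compiler knows exactly what to rebuild }

function HalfOf(const S: string): Integer;

implementation

function HalfOf(const S: string): Integer;
begin
 Result := StrToInt(S) div 2;  { StrToInt comes from SysUtils }
end;

end.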

Q. Is the compiler engine itself written in Delphi? How much does it differ from the Borland Pascal 7 compiler?

A. The compiler is written in assembly language. It is fully backwards compatible with BP7, and we've added lots of object-oriented extensions such as class references, virtual constructors, and the IS and AS operators. We did a lot of work to enable declaring, registering and filing properties, and we generate run-time type information that's used to communicate published property, event and method information to the development environment. You'll see some interesting applications of that capability in our 32-bit release. One unique enhancement was our use of bound method instance pointers to implement event delegation. They're very efficient and fit nicely into the language. And of course we did a lot of work to add structured exception handling. In addition, there are lots of little niceties that people have requested, such as support for C calling conventions.
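For example, a small sketch of the IS and AS operators mentioned above (the routine is invented, and assumes StdCtrls and Dialogs are in the uses clause):

procedure DescribeObject(AnObject: TObject);
begin
 { IS tests the actual run-time type; AS is a checked cast that raises an exception if the type is wrong }
 if AnObject is TButton then
  ShowMessage('A button captioned ' + (AnObject as TButton).Caption);
end;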

Q. Delphi implements objects in a manner similar to Apple's Object Pascal, with all objects allocated on the heap. Previous versions of Turbo/Borland Pascal used a more C++ like approach, with the ability to allocate objects on the stack and statically. Can you explain the reasoning behind this change in approach?

A. It really is a question of features vs. complexity. The philosophy of Delphi's Object Pascal language is to deliver the RIGHT set of language features, as opposed to any language feature ever known to mankind. It's the well known 80/20 rule: You can get 80% of the power for 20% of the complexity, but squeezing out that last 20% of power makes the whole thing five times as complex to program. Mixing static and dynamic allocation of objects is one of those features that fall into the latter group. By implementing a pure reference model we were able to simplify the entire Delphi component library, and do away with a lot of the pointer management that plagues other products. Even though Delphi objects are allocated on the heap, in a typical Delphi application you never have to deal with allocating and freeing them.
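As a small sketch of what the reference model looks like in practice (the routine and names are invented; assumes Classes in the uses clause), an object you create yourself is just a reference, with no pointer syntax involved:

procedure BuildList;
var
 Names: TStringList;  { a variable of a class type is a reference, not the object itself }
begin
 Names := TStringList.Create;  { the object itself always lives on the heap }
 try
  Names.Add('Anders');
  Names.Add('Hejlsberg');
 finally
  Names.Free;  { components owned by a form are freed for you; standalone objects like this one are not }
 end;
end;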

Q. I find this answer quite surprising and counter-intuitive. You had already implemented mixed static/dynamic allocation, and therefore presumably cracked the problems involved, so why go to the trouble to revert to the Apple Object Pascal approach which you had initially rejected? Is, say, a stack allocated object, with constructors and destructors automatically called as the thing moves in and out of scope, really more complex than a heap allocated object, where you must make special provision to kill the thing off at the end of its life? I would have thought that the fact that the component library *mostly* frees objects automatically but *sometimes* doesn't would tend to add to rather than reduce the application programmer's burden. Also, the change in model must confuse both existing BP programmers and also migrating C++ users.

A. Again, we didn't revert to anything because we really started with a clean slate. Our class reference model is sufficiently powerful and flexible, so having only one sort of class is actually an advantage. Once you give someone two ways to do the same thing, you have made your product less usable and you have to now help them understand when to use a statically allocated class versus a dynamically allocated one. We're quite happy with the choice we've made. It's simple to understand, efficient, and allows us to add garbage collection in some future release. And, of course, if you've got old code from BP7 that uses old style objects, you can still compile it from within Delphi.

Q. Exception-handling - what were the major influences on your design?

A. We looked at a number of languages and implementations, and were most influenced by C++ and Modula-3. Delphi is like C++ in that exceptions are classes, but more like Modula-3 in terms of the supporting language constructs.

Exceptions are a quiet revolution--they truly simplify the way you write code. For the most part you can write your code as if errors will never occur, instead of spending the bulk of your time trying to determine if an error occurred, and if so, how best to clean up and back out of what you were doing. Delphi's Visual Component Library was designed from the ground up with exception handling built in, and that is a large part of the reason why Delphi and applications written in Delphi are so fault tolerant. One of my favorite demos is a little two-liner that, on the click of a button, assigns NIL to a pointer, and then dereferences the pointer. Each time you click the button, Delphi reports that a General Protection Fault exception has occurred, but because of the built-in exception handling logic, the app keeps running instead of bringing itself down.
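That demo boils down to something like the following (form, button and handler names are assumed):

procedure TForm1.Button1Click(Sender: TObject);
var
 P: ^Integer;
begin
 P := nil;
 P^ := 42;  { dereferencing nil triggers a GP fault, which Delphi reports as an exception; the app keeps running }
end;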

Q. I'd like to draw you out a bit to expand the answer above with a few specifics.

A. As in C++, an exception in Delphi is simply a class, which means you can take advantage of the inheritance mechanism to handle whole sets of exceptions easily. For example, Delphi declares the following classes which deal with floating-point exceptions:

type
 EMathError = class(Exception);
 EInvalidOp = class(EMathError);
 EZeroDivide = class(EMathError);
 EOverflow = class(EMathError);
 EUnderflow = class(EMathError);

As you can see, EMathError is the ancestor of the other exceptions. Here's an example of a TRY..EXCEPT statement that handles floating-point exceptions:

try
 PerformCalculations;
except
 on EZeroDivide do ...;
 on EMathError do ...;
end;

If the PerformCalculations procedure raises an EZeroDivide exception, it is handled by the first handler. If it raises any other EMathError exception, the second handler takes care of it. Since there is no ELSE clause, no other exceptions are handled--they are instead propagated to an enclosing exception handler.

Q. Delphi's ability to handle GP faults is indeed one of its neatest tricks. Was it difficult to implement?

A. It wasn't too bad, but it did take some nifty use of TOOLHELP.DLL, which implements the Windows low-level system tools interface. We basically register an interrupt callback function which maps processor faults into Delphi exceptions. The reason that it all works, though, is that VCL was engineered from the ground up to be exception aware. Because of that, when a GP fault occurs and is mapped into an exception, the operation that was in progress will automatically know how to back out and clean itself up.

Q. Can we expect any other major syntax additions/changes, for example Eiffel style assertions?

A. We're always evaluating new language features, and surely there will be some in the upcoming 32-bit version. I'd rather not get into specifics, but as a rule, we don't really think about language extensions in the abstract. Instead we look at the language as part of a bigger picture (class library, component model, visual environment) that must evolve as a whole to support new technologies and improve ease of use.

Q. Can you give Delphi programmers any guidance on how best to write applications that will be portable to the 32-bit version of Delphi? The new "Cardinal" data type has arrived almost completely unnoticed. Are there any other issues we should be aware of?

A. Delphi's Visual Component Library was designed with portability in mind. As long as you stay away from in-line assembler, 16-bit pointer arithmetic, and Windows 3.1 API functions which aren't supported in the Win32 API, your apps should port with little or no modification.

The Cardinal and Smallint types were introduced to facilitate portable code. Of the built-in types, Shortint, Smallint, Longint, Byte, and Word have identical representations in 16- and 32-bit code. The Integer and Cardinal types, on the other hand, represent the most efficient signed and unsigned integer types of the particular platform. In the 16-bit version they are 16-bit entities, and in the 32-bit version they are 32-bit entities. In general, you should use Integer and Cardinal whenever possible, and Shortint, Smallint, Longint, Byte, and Word only when the exact storage representation matters.
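A quick way to see this for yourself is to check the sizes at run time; the routine below is invented for illustration and assumes SysUtils and Dialogs are in the uses clause:

procedure ShowIntegerSizes;
begin
 { Integer and Cardinal track the platform: 2 bytes in the 16-bit compiler, 4 bytes in the 32-bit one.
   Smallint, Longint, Byte and Word keep the same size on both. }
 ShowMessage(Format('Integer=%d  Cardinal=%d  Smallint=%d  Longint=%d',
  [SizeOf(Integer), SizeOf(Cardinal), SizeOf(Smallint), SizeOf(Longint)]));
end;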

Any 64K limitations found in the 16-bit version will disappear in the 32-bit version. For example, the 32-bit version allows you to declare arrays and allocate heap blocks of any size up to 4GB!

Q. What is the current state of the 32-bit version? Will it support 16-bit VBXs, like BC++? Delphi 16-bit code runs somewhat slower than C++ - are you doing anything about this for the 32-bit version?

A. Delphi was written to be portable--we've been working on the 16- and 32-bit versions in parallel since day one. The 32-bit version is in field test now, and it will ship shortly after the commercial release of Windows 95. Yes, there is a foundation of 32-bit VBX support technology available in-house, but our primary focus is OCX controls. That's what the competition is working on, and that's where we see the market going. With respect to better code generation, Delphi-32 generates the same high-quality code as Borland C++ 4.5--in fact, they use the same optimizing back-end code generator.

Q. Are there any plans for Borland produced or badged add-ons for Delphi, in addition to the Visual Solutions Pack?

A. We just released the RAD Pack for Delphi, which includes Turbo Debugger for Windows, Resource Workshop, the Resource Expert, Visual Component Library source code, the--much requested--Language Reference Manual, and Visual Solutions Pack 1.1. We did have some quality problems with the initial release of VSP, but those have been resolved, and we now have a Companion Products group to provide Borland-quality add-ons, such as Notes support for Delphi programmers and other often requested components.

Q. Delphi is a terrific tool for rapidly developing state of the art software, but a number of shareware authors have expressed a wish that executables could be made smaller. Is it technically feasible to create a DLL-based version of VCL? Surely this must be possible since COMPLIB is a DLL which is used by the Delphi design environment?

A. It's something we're looking at, and certainly some of the 16-bit complexities with respect to multiple DLL clients are gone in 32-bit land. At this point I can't really comment on specific solutions, other than to say that we're actively looking at ways to make our executables even smaller.

Q. A long-standing and major criticism of Borland Pascal is the proprietary nature of the object file format. It's appreciated that going to the OBJ file format would be a retrograde step, but why won't Borland at least document the file format? That way, developers can create their own tools such as disassemblers, C to Pascal linkers and so forth. Again, it's understood that the file format changes with each release of the compiler, but documenting the changes with each new version would enable other developers to create conversion tools even if Borland don't want to do this. At the moment, if you don't have the source code, all your units become useless each time the compiler is updated.

A. We're well aware of these issues, and the 32-bit version will address them in a number of ways. What I can tell you at this point is that the 32-bit compiler has an option to produce .OBJ files, which can be linked with .OBJ files produced by other compilers.

Q. A related issue: the move to Windows has diluted the importance of the OBJ issue, because you can now call DLLs. But the Delphi user has still to translate the (typically) C/C++ headers into Delphi import units, an exercise which is at best tedious and time consuming and, if you happen not to have had C++ experience, quite hard. It's the sort of job best left to a machine. Given that Borland has a lot of C++ parsing expertise lying around on the ground, have there ever been any plans to create such a tool?

A. Well, I'm not sure which C/C++ headers you're talking about. We've already translated all the Windows and OLE 2 API header files, and corresponding interface units are included with Delphi. But you're right, if you have a 3rd party DLL that was previously only interfaced to C/C++, somebody will have to do the translation. Usually, it's not that bad, and I think you'll see an increasing number of vendors providing Delphi interface files for their DLLs. Also, I think you'll see more and more products take advantage of the OLE 2 ITypeLib and ITypeInfo interfaces, and we'll provide a tool that takes that information and produces a Delphi interface unit.
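For what it's worth, a hand translation mostly comes down to writing external declarations; the DLL and routine below are hypothetical:

{ C prototype: int FAR PASCAL AddNumbers(int a, int b); exported from a hypothetical THIRD.DLL }
function AddNumbers(A, B: Integer): Integer; external 'THIRD';

A routine exported with the C calling convention would be declared with the cdecl directive (mentioned in the answer above) rather than relying on the default Pascal convention.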

Q. The ability to create a single EXE for redistribution is very attractive, but somewhat spoilt by the need to include the BDE with database applications, even if they only want to access the odd DBF. Any plans to clean this up?

A. We're working with several third parties, including SAX Software, Eschalon Development, Sterling Software, Great Lakes Software, and Shoreline Software. They have, or will soon have, products to help you deploy your Delphi database applications. In addition, we're making a deployment kit available via CompuServe and the Internet.

Q. Also on data access: is it possible to modify/inherit from the data access controls to provide, for example, 'native' access to FoxPro/Clipper databases? If so, are any such products being developed by Borland or Third Parties?

A. I know of several Third Parties working on native access to FoxPro/Clipper as well as B-Trieve. Some of them are in beta at this point. You can contact them for additional information on the DELPHI CompuServe forum or find out about them in the Delphi "Power Tools" catalog.

Just to clarify: Sax, Eschalon, Sterling etc do install/setup tools that work with Delphi and the Borland Database Engine. There are other companies creating components that provide direct access to BTrieve, Fox, and other database formats.

Q. What is Delphi's main niche in the developer tools market? Compared with, say, Visual Basic, PowerBuilder and C++?

A. Delphi is a general purpose Windows RAD (Rapid Application Development) tool. The point is that Delphi is NOT a niche tool. From the outset, we've designed Delphi to take you from prototype to production, whether you're targeting a Client/Server environment or just writing a Windows application. I hear our competitors say that you can use their tool for rapid prototyping, and then port your app to C++ for production. But you know, rapid application development isn't really rapid unless you can go from prototype to production, all using the same tool! I also hear how competing products will address performance issues by generating C or C++ source code. This idea of building the application with one tool, and then having it generate C or C++ source files that have to be run through another tool, is ludicrous. How are you expected to debug the final code? Requiring users to find bugs in machine generated C++ code, and understand how that maps to the original 4GL code, just doesn't make sense. We've been shipping development environments with integrated compilers for 12 years--I think the time is gone when programmers would accept anything else.

Q. How have Borland's recent troubles affected Delphi's development and its take-up? Was the absence of a Language Reference Manual in the initial product a consequence of these troubles? (You had to mention the Language Reference Manual?  My mistake, I'm sorry! --Zack)

A. In the two years we were developing Delphi, the company did go through some difficult times. That was all resolved before we shipped. Now the entire company is focused on development tools, we've won the Lotus lawsuit, we've launched Delphi and Delphi Client/Server worldwide, and both products continue to sell well above expectations. In fact, I understand that Gray Matter reports that Delphi is the #1 selling development tool in the UK. The Delphi development team is 100% intact, and focused on Delphi for Windows 95.

We really underestimated the demand for the Language Reference Manual. (You're telling me! --Zack)  It will be included in Delphi for Windows 95. Meanwhile, we've made good by uploading an Adobe Acrobat version to CompuServe and our WWW page, and a printed version is now also available from Borland.

Q. We are all pleased you resisted the opportunity to christen a product Power Visual Turbo Pascal Objects for Windows - but how did it come to be called `Delphi'?

A. We actually tried to call it "Power Visual Turbo Pascal Objects for Windows", but that name was already trademarked :-). One of the senior guys in QA (Danny Thorpe) dreamed up Delphi as a code name quite early on, and every time we did a market survey of product name candidates, everyone said "well, those are ok, but we really like `Delphi'". So in the end, we kept it.

Q. Which part of Delphi are you most proud of? ... and which part least?

A. The thing I'm the least proud of is probably the initial lack of a Language Reference Manual. But that's taken care of now. (Dammit, I said I was sorry already! --Zack)

What I'm most proud of is the fact that the energy we invested in foundation technologies like extensibility and exception handling enabled us to build Delphi in itself. Can you imagine VB or PowerBuilder written in themselves? By building Delphi in Delphi, we really got to experience firsthand what was right about the product, and what needed fixing. I sometimes hear frustrated users comment "The programmers that wrote this %&##$ thing should be forced to use it themselves!". Well, we did, and we're really proud of the result.

-------- CHRONOLOGY --------

  • 1960 Anders born in Copenhagen, Denmark.
  • 1979 Enrolls at the Danish Engineering Academy. Co-founds PolyData, one of the first Danish microcomputer companies.
  • 1980 Releases his first Pascal compiler--a 12K Pascal subset in ROM for the British NASCOM Z-80 based kit computer. Eventually sells the rights to this product to Lucas Logic.
  • 1982 PolyPascal for CP/M-80 released. Product is now a complete implementation of the Pascal language.
  • 1983 Sells the Borland founders (Niels Alex Jensen, Ole Henriksen, Mogens Glad, and Philippe Kahn) on the idea of a Pascal compiler with an integrated editor. In November releases Turbo Pascal 1.0 for CP/M-80, CP/M-86, and MS-DOS. The newly formed Borland company, essentially penniless, places an advert for Turbo Pascal in Byte, bluffing the Byte Ad executives into giving them credit. The compiler is priced at $49.95 and is an instant smash hit.
  • 1986 Turbo Pascal 4.0 released, featuring an Integrated Development Environment--the first of its kind for the PC environment--and introducing modular compilation (previously Turbo Pascal programs had to be compiled all in one go and could be no larger than 64K unless overlays were used). CP/M support is dropped.
  • 1988 Turbo Pascal 5.0 released, featuring integrated debugging and VROOMM (Virtual Runtime Object Oriented Memory Manager) overlay management technology.
  • 1989 In response to Microsoft's object oriented QuickPascal, Borland releases Turbo Pascal 5.5, which has its own OOP extensions. Microsoft later drops QuickPascal from its product line.
  • 1990 Turbo Pascal 6.0 features a new, much improved Integrated Development Environment, and includes the Turbo Vision object-oriented application framework.
  • 1991 First release of Turbo Pascal for Windows. Features a Windows hosted IDE and the ObjectWindows Library (also known as OWL).
  • 1992 Borland Pascal 7.0 includes both a DOS and a Windows hosted IDE, and allows developers to target DOS, DOS Protected Mode, and Windows.
  • 1995 Delphi and Delphi Client/Server released on schedule on February 14th.

--------

This interview originally appeared in EXE, the UK's leading programming magazine. If someone can send me a .EXE cover or logo, that is appreciated!
Thanks to Ben Riga for sending it to me. Thanks for also not mentioning the Language Reference Manual.


Congrats to Duo Security!

Congratulations to Duo Security, which announced that it is to be acquired by Cisco Systems for $2.35b. This is a great outcome for all involved, and I'm very proud of what the team has accomplished.

I worked with Duo for about three years, initially as an advisor and ultimately as Chief Operating Officer running Sales, Marketing, Products, Engineering and Services. I helped grow the company from around $7m in Annual Recurring Revenue (ARR) to about $100m. The company has continued to grow to 700 employees, 12,000 customers and revenues that I estimate could exceed $200m ARR by year end, based on prior published numbers.

Duo is the fastest growing company I've been a part of; faster even than Zendesk or MySQL. When a company grows this quickly, it becomes a different organization every year. The early sub-$10m revenue company is quite different from where Duo is today. I would always tell new employees to be prepared for change. What remained constant was an underlying set of midwestern values of hard work, customer care and innovation that made Duo special. (Also, we had a really fun band called "Louder Than Necessary.")

It's a testament to the founders' vision and the management skills of the leaders we recruited that the company scaled so well. I remain especially proud of the many people we hired, promoted and developed to become the future leaders in the company. As news of the acquisition came out, many people have asked me about the deal, so here are my thoughts...

First of all, this should be recognized as an absolute success for the management team. To grow a company to this size and value is very rare. Less than 1% of venture-backed companies get to a valuation of $1 billion. It's also one of the biggest software successes in the midwest and proves that you don't have to be in Silicon Valley to win big. (Duo has in fact expanded into San Mateo, Austin, London and Detroit. Part of Duo's success is due to these multiple locations, but that's another story.)

Secondly, this deal creates a larger force in the industry. There is no doubt that Duo could have proceeded towards an IPO in 2019; they had in fact hired several new executives to lead these efforts in recent months. But the combination of Cisco and Duo together is more significant than Duo on its own. There has been a large amount of consolidation in the security space in the last few years and I believe Cisco will emerge as a leader. They have the respect and attention of Global 2000 CISOs and CIOs, a strong worldwide sales machine and a large number of related products. In a few years time, it will be clear that Microsoft isn't the only company that is capable of reinvention. 

Thirdly, this represents an opportunity for further growth for the team at Duo. Cisco plays on a larger global stage than Duo could on its own. But Duo's executives, managers, engineers, security experts, sales people, support engineers, product team and marketing organization have a lot of mojo to contribute. Duo has become one of the fastest growing SaaS companies on the planet and they know a thing or two about making security easy. The company has a Net Promoter Score (NPS) of 68, one of the highest in the industry!

And that is the key to why this deal makes sense to me. The combination gives Duo scale. But it also injects speed and innovation into an industry that needs it. The old approach to security--locking down your network with a VPN and using old-fogey security tokens--doesn't work when your applications are in the cloud, employees are mobile and hackers are targeting everyone and everything. I believe Duo is well positioned to lead with a modern new approach to security.

There's also a fourth point, which in the long term could become even more significant. Duo's success injects a large amount of capital into the Ann Arbor / Detroit area. The company has also developed tremendous expertise in building a SaaS company at scale. That combination of capital and talent will result in the creation of additional startups in coming years. Duo's investors (Benchmark, GV, Index, Redpoint, Renaissance, True Ventures...) did very well and are likely to be open to investing in new startups in the region alongside other firms focused on the midwest such as Drive Capital, eLab Ventures, Steve Case's Revolution, RPM Ventures and others. This acquisition shines a spotlight on Michigan's growing tech scene and that will have all kinds of positive impact on investment, innovation and job creation.

To all my friends at Duo, this is a vote of confidence in all that you have created. Congratulations on achieving this milestone. Now there's an even bigger opportunity to take this product line, security expertise and company culture to a bigger audience than we ever thought possible.

Go Duo!  

 


Are Tech IPOs Back in Fashion?


Much has been made of the slowdown in tech IPOs in recent years, but that trend appears to be changing in 2017. Of course, there are a few mega-companies that continue to sit on the sidelines (AirBnB, Uber, DropBox, I'm looking at you!) but I think we will continue to see improvements in 2017 and 2018. Perhaps not as strong as the record number of IPOs of 2014, but likely enough to reverse the declining trend from 2015 and 2016.

Early this year we saw IPOs from the likes of Snap, MuleSoft, Alteryx, Okta, and Cloudera, among others. Other than Snap, which was rather over-hyped, most of the others had very good returns for their investors and are continuing to trade above their IPO price. And overall multiples for tech companies on NASDAQ and NYSE are holding steady. I'm especially encouraged by the performance of B2B software companies MuleSoft, Okta and Cloudera. MuleSoft now has a market cap over $3b, and Cloudera and Okta look likely to cross that threshold later this year based on their steady growth and increasing efficiency. It looks like B2B stocks are once again in fashion.

My expectation is we'll see a bit of an IPO slowdown during the summer and then a significant uptick in the fall. For B2B SaaS companies getting to $100m or beyond in annual recurring revenue (ARR), this will be an interesting time.


2016: Down Rounds & Layoffs


If you thought 2015 was a rough year for the financial markets, you ain't seen nothin' yet. So far, 2016 has every sign of being a full-on bear market, meaning a 20% or more decline in the major stock markets. 

And no surprise, we've seen a ton of bad news for so-called Unicorns--tech companies valued at over a billion dollars. Unfortunately, many of these companies have failed to build an efficient, profitable business to maintain their lofty valuations. As a result, companies are seeing their public and private valuations dramatically reduced. Here are a few stories that have surfaced in recent months:

The point of all this bad news is that it's not just about one or two companies that have missed the mark. It's a systemic problem. And likely, we are still just seeing the early signs of what may become more common in 2016.

To be sure, there are plenty of strong companies out there whose valuations are well justified. Companies like Atlassian, Hubspot, New Relic, and Zendesk all have efficient business models and disciplined growth.

But a lot of other wannabes have billion dollar private valuations that are much harder to justify. It could be that the market correction in tech is just an adjustment to valuations that were never warranted in the first place. There are lots of cool apps, devices and services out there, but it doesn't mean they are good businesses. If a company gets to $100 million in revenues and has no clear path to profitability, it's kind of a fool's errand. And many of these companies haven't even gotten that far. 


So what does this mean for startups? Basically the lofty multiples of 2010-2012 are gone. The truly great companies will still command good multiples, but only if they are converging towards profitability. The "growth at all cost" land grab strategy that inspired young companies to burn millions or tens of millions of dollars per month isn't going to be viable in the current climate.

If you're in a company with less than 12 months of cash, you'd better make sure management is reducing expenses. If you don't have an increasingly efficient growth story, this is going to be a tough time to raise money, so expect a down round. If you can weather the storm without having to raise additional capital and grow back into your valuation over time, that's not a bad way to operate.

 

 


Silicon Valley Overheated?


Don't Worry - Winter Is Coming

There's been quite a bit of press recently about whether there's a tech bubble or not. Certainly, things are overheated in the valley. Traffic is out of control, competition for talent is fierce and there are definitely some companies with billion dollar valuations that seem, well, a little suspect. If HBO's series "Silicon Valley" is meant to be a satire of the worst in tech, it's remarkably close to the truth in some areas. I think the show is funny, but almost every farcical element in that show seems way too close for comfort.

One of the things newcomers to the industry forget is that, like most sectors, tech is a cyclical industry. There are boom times when stocks seem to just go up and there's unlimited demand for IPOs and then, well there's the opposite. That's what happened in the early 1980s, more severely in 2001 and again in 2008. There have been a couple minor corrections to sky-high SaaS valuations in 2014 and 2015, but nothing like we saw in earlier downturns.

The companies I work with all seem to be in good shape. They've got plenty of dry powder, having raised money in the last 12 months, and generally are increasing their efficiency in terms of customer acquisition costs, cash position, etc. But for companies that have less than 12 months of runway, there could be problems. Like, going-out-of-business problems. As is occasionally reported by new CEOs who run out of money, there's this tricky thing called "burn rate."  Which basically means, you should make more money than you spend. Really, it's not that complicated.

When you build a company, you can't just optimize for growth at all costs. Otherwise, those costs can easily exceed what you're bringing in. And when the piper changes his tune and VCs and Wall Street decide to no longer value money-losing companies quite as optimistically as those that are generating cash, it can lead to some pretty ugly situations. Just take a look at the downfall of GetSatisfaction, Fab, Zirtual or even Good Technology, which managed an exit, but at less than half its billion-dollar valuation.

Good Technology was around for almost 20 years and raised $290 million (!) over six rounds. The company talked about doing an IPO back in 2013. They filed for an IPO in 2014 and then amended it in 2015. Sadly, their growth started to decline while losses continued to mount. They lost $95m on revenues of $158m in 2014, leaving them just 7 months of runway heading into 2015.


Presumably Good couldn't raise more money and couldn't do an IPO in the current market. So they got acquired by Blackberry, a struggling company if there ever was one. While some of the investors and execs will have made money, I doubt the rank-and-file employees got much out of it.

It was probably lucky for Good that they didn't complete their IPO. While strong SaaS performers like Hubspot, Zendesk, New Relic have done well with steady growth in their revenues and share price, there's an increasing number of tech companies trading well below their IPO price including Alibaba, Apigee, Box, Castlight Health, Etsy, Twitter and others.

Bill Gurley from Benchmark Capital has been a particularly strong voice reminding startups that "Winter is coming"; valuations are being compressed and CEOs need to make sure they have a path to profitability.  Words to live (or die) by.


Open Source Enigma Project

Open Friggin' Enigma

The wild and crazy guys over at S&T Geotronics, James Sanderson and Marc Tessier, have decided to go full tilt with a Kickstarter version of their DIY Open Enigma Project.  For those who missed the fanfare last year, they were featured on Instructables showing how to build an Arduino-based encryption machine that works exactly like a WWII-era Enigma.  You know, the thing that Alan friggin' Turing and his team at Bletchley Park cracked to bring an end to WWII?  Yeah, that Enigma.

The Enigma was also featured in the aptly-titled novel "Enigma" by Robert Harris and the film starring Kate Winslet and some people I've never heard of.  That film was produced by Mick "code-breaker" Jagger.  Yeah, that Mick Jagger... By the way, Jagger owns his own personal friggin' Enigma machine.  How cool is that?  

The Enigma Machine (and its cracking) remains one of the most significant breakthroughs in computing.  And Turing is considered one of the fathers of modern computing as well as a brilliant mathematician, logician, code-breaker and... wait for it... world-class marathon runner. (I kid you not, the guy ran a 2:46 marathon, coming in 5th in an Olympic qualifying round.  Take that Nazi scum!)

But unless you happen to have a spare $208,137 lying around to throw at a Christie's auction, the closest you're ever gonna get to an Enigma machine is to view Mick Jagger's Enigma sealed behind glass at Bletchley Park.  I've been there, it's fantastic.  But it's also heavily guarded.  Just sayin'. 

Now with the Open Enigma Project, you can get a working, life-size replica of the Enigma and be a part of computing history.  You can sponsor the Kickstarter project for as little as $5 (cheapskate), or if you're a DIY hardware hacker, for $250 you get a bag of electronic stuff you can assemble.

Or if you're a software person who wouldn't know which way to plug in a soldering iron, then you can get a fully assembled kit (without a case) for $300.  And if you want the whole enchilada including the genuine wooden case, it's $600.  Executives, VCs, rock stars and others can splurge for even higher levels to help make this project a reality. 

This is literally a once-in-a-lifetime opportunity to get a working Enigma replica.  And that is some cool cyber-encrypting steampunk goodness!  You can plug the Open Enigma into your PC via USB port and run some kind of crazy distributed big data bitcoin-mining NoSQL social media photo sharing site on it.  

All the hardware and software is open source so you can compute all you want on your desk, put it behind glass or run a marathon with it.  Just like Alan Turing would have done.


Ellison Buys Hawaiian Island


Yes, it's true, Oracle CEO Larry Ellison has bought 98% of the Hawaiian Island of Lanai for $500m.  The island was previously owned by David Murdock who bought Dole in 1985.  The island has 3,200 residents, two luxury resorts, two golf courses and is 88,000 acres (141 square miles) in size. 

Here are some alternate headlines & subheads:

Ellison Buys Island of Lanai
Declares Bill & Melinda Gates Marriage to be in Violation of Oracle Terms of Service

Says "What Island?  I only wanted a closed off veranda.  Doesn't anyone listen to me?"

Thanks Safra Catz for Really Screwing Customers in Q4

Praises Gates & Buffet for Curing that Malaria Thing

Cost less than Buddy Media Anyways

Plans on Burying 3 Tons of Obsolete Sun Hardware

SAP Announces Intent to Acquire Alcatraz for $600m

Tells 3,200 Residents to Get Cracking on Oracle 12i

Bans Open Source Software

Still Angry About Getting Outbid on Yammer 

To be Renamed LarryLand

Reminds Facebook Millionaires Not To Be Too Flashy

Passes Law Banning Rival Marc Benioff From Ever Wearing Hawaiian Shirts Again

Expresses Enthusiasm for Pineapples and Local Super Skunk


GigaOm Net:Work Conference - Dec 9


I only recently found out about GigaOm's upcoming Net:Work conference.  It's held December 9 at UCSF Mission Bay conference center.  While the name of the conference is a bit ambiguous, the actual area of focus is very clear: how will we collaborate in the 21st century?  

The impact of smartphones, tablet computing, social networks, Software-as-a-Service and Cloud computing is just starting.  As a result, I think there are tremendous opportunities for startup companies to disrupt existing markets with more modern, lightweight applications that foster collaboration inside the company as well as with partners, vendors, consultants and customers.  

Companies that can more effectively tap into talent within their organization and across traditional boundaries may end up having a significant competitive advantage.  Instead of the traditional top-down view of management edicts flowing from HQ to employees and field offices, you now have the potential to develop, test and refine ideas from any part of the company or community regardless of location.  

That was the approach we took at MySQL and it worked very well with employees distributed in more than 40 countries, 90% of whom worked from their homes.  We also had a huge community of users we could tap into that contributed tremendous value to the company.  Even though we had primitive tools for collaboration (IRC, Skype, Forums, Wikis, conference calls, mailing lists etc), we always operated with a global perspective. This enabled us to develop great talent regardless of location.  Managing a distributed organization is not easy, but you get some amazing benefits if you do it right.

Speakers at the conference include Marc Benioff (Salesforce.com), Dave Hersh (Jive), Maynard Webb (LiveOps), Tom Kelly (Moxie Software), Doug Solomon (IDEO), Zach Nelson (NetSuite), Aaron Levie (Box.net), Ross Mayfield (SocialText) and more.

Also, thanks to Skip Hilton of GigaOm, there's a 50% off registration coupon: HILTONNETWORK50


MySQL Sunday at Oracle Open World


Looks like Oracle is continuing to invest heavily in MySQL and the storage engine eco-system.  They've announced a full MySQL Sunday at the upcoming Oracle Open World on Sunday, September 19, in San Francisco.  Registration is only $75, which is a bargoon.  I expect this will be bigger than any MySQL conference held to date.  And there's also the JavaOne developer conference and the rest of the Oracle Open World show.

Ok, technically things actually start at noon, but knowing the MySQL crowd, I am sure there will be parties that go well past midnight.  Helan går!


Piper Jaffray on the Cloud

Piper Jaffray has published a 300+ page study on the cloud computing industry based on a recent survey undertaken of 100 CIOs. Bottom line, cloud computing is expected to grow significantly over the next five years. 

    Survey respondents expect the mix of cloud computing to escalate strongly to 13.5% in five years. This equates to a five-year CAGR of 19.2%, or 23.9% when we also incorporate IDC’s forecast that total software budgets will grow 4.7% annually. In other words, software spending will grow gradually in the next five years, but the mix of spend allocated to cloud-based applications will likely surge rapidly. Another way to think about the data is that the Cloud Computing market is expected to grow five times as fast as the broader software market: 23.9% vs. 4.7%.

If anything, I think the prediction is conservative and the impact could be much larger in magnitude when mainstream adoption occurs.  But the risk is that adoption takes longer, just as it did for open source software.  And as the report indicates, open source is powering much of the cloud computing that's going on:

    The next-generation Cloud Computing data centers are NOT running Microsoft Windows; they are increasingly leveraging the compelling economics of open source components. For example, the data centers powering Amazon, Google, and salesforce.com all run on Linux and other open source technologies. In fact, Red Hat’s operating system and the MySQL database are key components to many of the leading-edge Clouds being developed today. 

Why is this occurring? Because open source leverages a global community development process which results in a product that evolves rapidly, provides transparency into the source code dynamics, and surpasses other products in terms of security and reliability – all at a lower total cost of ownership (TCO) than traditional offerings.