Tim Tow of AppliedOLAP is one of those guys in the industry who reminds you that excellence is its own reward. He is one of the go-getters I've always admired working with and knowing. We spoke on the phone today for the first time in years, and his baby, Dodeca, is all growed up.
As part of our strategic marketing at Full360, we will begin developing vertical markets for our data platform and go after those customers. After a long time of feeling rather sorry for Essbase and the EPM part of our business, the progress of Dodeca and its customer successes have warmed my old cold cockles. I've taken some notes on the things I most specifically need in Dodeca for a customer or two I have in mind. But here's the turning point. Dodeca has proven to me today that the true spirit of Essbase has not been lost and buried in the competition for Oracle Middleware market domination, a battle royale over mid-tier enterprise technology that has raged for years. What did Dodeca do today? It convinced me once and for all that Essbase can dominate with a single fat-client reporting platform based on the world-beating concept of the spreadsheet. It's all the front-end Essbase will ever need.
For many years, there seemed to be nothing capable of beating the simple, pure combination of Essbase and the Excel Add-In. It was simple, fast, and powerful. Then came a bunch of big front-end integration ideas that made sense on paper and passed the muster of integration testing, but lost the spirit and speed of discovery. Then, before ASO, Essbase itself was starting to show signs of fatigue in dealing with larger, higher-order dimensional models. And once that was overcome came the problem of drill-through, which still had some security problems four years ago, not to mention timeout issues with APS when the data got big. Today, I believe firmly that a one-two punch of EAS-driven Essbase and Dodeca's current version is all anybody on the planet needs for that holy grail of multiuser, concurrent, multidimensional read/write data with security down to the cell level. Ladies and gentlemen, welcome to the return of speed of thought, brought to you by C# fat-client .NET brilliance. This is not a slow clap. This is a little bit of giddy excitement, and I'm really too old for that.
While Full360 has been plumbing the depths of new paradigms of data management, parallelization and asynchronous data streams, real-time processing and massive query spaces, we've lagged a bit behind the current comings and goings of the datamart world. In other words, we've been fulfilling our mission of being cloud-focused data experts. Now, with some renewed spirit, our longtime friendship and partnership with AppliedOLAP brings to mind a lot of interesting possibilities.
So... to wit:
- Essbase Java API
- All versions of Essbase since 6.5.3
- JDBC support
- Custom starting points by user
- xcopy for .NET deployments to Citrix
- APS and/or straight TCP connection to Essbase
- MySQL, MSSQL, or Oracle repository; no special server-side scripting
- Highly scalable to thousands of users
Dodeca is the best way to get fast data out of Essbase. It is extremely configurable in ways that serve user requirements over technology requirements. It is a .NET-based product that takes a bottom-up approach to getting Essbase multidimensional data into grids. It has extended Essbase's ability to deliver tens of thousands of rows at the old 'speed of thought'. It is a managed spreadsheet environment under complete administrative control, with hybrid drill-through capabilities.
Dodeca is entirely, exhaustively spreadsheet-centric. It uses highly specific defined data areas to perform multiple multi-sourced retrieves from data sources. It has control of all events, both local and specific to the Essbase API, as well as remote JDBC operations. This allows a multi-layering, multi-query ability to draw data into the spreadsheet exactly where it is needed. Dodeca can employ any of the 300+ Excel functions to manage front-end calculations.
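To make the "defined data areas" idea concrete, here's a toy sketch in plain Python (emphatically not Dodeca's actual API) of named areas, each bound to its own query, being filled into a single grid:

```python
# Toy model of "defined data areas": each named area is bound to its own
# query and anchored at a cell, so one sheet can mix several retrieves.
# This is an illustration of the concept, not Dodeca's actual API.

def run_query(source, query):
    # Stand-in for an Essbase or JDBC retrieve; returns rows of cells.
    sample = {
        ("essbase", "sales by region"): [["East", 100], ["West", 80]],
        ("jdbc", "budget by region"):   [["East", 90],  ["West", 95]],
    }
    return sample[(source, query)]

def fill_sheet(areas):
    """Place each area's result rows into the sheet at its anchor cell."""
    sheet = {}  # (row, col) -> value
    for area in areas:
        rows = run_query(area["source"], area["query"])
        r0, c0 = area["anchor"]
        for dr, row in enumerate(rows):
            for dc, value in enumerate(row):
                sheet[(r0 + dr, c0 + dc)] = value
    return sheet

areas = [
    {"source": "essbase", "query": "sales by region",  "anchor": (0, 0)},
    {"source": "jdbc",    "query": "budget by region", "anchor": (0, 3)},
]
sheet = fill_sheet(areas)
```

Two retrieves from two different sources land side by side on one sheet, which is the essence of the multi-query, multi-layering trick.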
AppliedOLAP has re-engineered the spreadsheet interface and allowed Essbase to work to its full potential by rewriting in C# what used to be VBA. The product is clearly superior to all other Essbase front-ends. If you want to work with data in the spreadsheet paradigm, it is the gold standard. Dodeca's performance is remarkable; it handles 100,000-row retrievals with ease. Its ability to cascade data into multiple tabs of a spreadsheet is beyond anything I've ever seen. It has managed both to allow highly complex database operations and to keep basic ones fast and simple.
Mastery of Dodeca will take time. For more sophisticated applications, it is clearly aimed at .NET programmers comfortable with that level of detail and control. If you have complex workflow rules and highly detailed report requirements, you're going to have such people in IT anyway. But I see every indication that a quickstart implementation, built on basic Excel templates, supports the kind of rapid prototyping any VBA-competent developer at a smaller company can pull off with ease. No doubt AppliedOLAP, with its long and deep roots in the Oracle customer community, has plenty of third-party developers at hand.
I am impressed with Tim's direction in making a tool whose flexibility makes for simpler implementations that leverage the dynamism of the Essbase query space, rather than filling up servers with canned reports, each with a single query statement. It's a win. And suddenly Essbase is way more interesting again.
This is my blog, but it's also a kind of unofficial blog for Full360. Very interesting things are going on.
About a year ago, in a hotel room at the Venetian, we began a discussion about what we wanted to build into the Full360 elasticBI platform. We have been very successful. This year, we've put more time and people into that process, and considering what we want, what our customers have had us build, and what's happening in the market, we have a very different outlook than just a year ago.
It's really time to say that BI is just a part of what we're all about. It's a big part, but...
I think the simplest way to understand it is to know that we have a responsibility to funnel information to executive decision makers. That means, as systems integrators, we get to know a lot about all of the different kinds of source data in enterprise data flows. We have made it our business to re-architect enterprise paradigms for the AWS cloud. Our success in understanding and implementing that has far-reaching consequences for what we can do, and therefore who we are.
This year at AWS re:Invent, Amazon introduced several groundbreaking products, not the least of which is their new BI tool, QuickSight. As we have been talking to people and prospects, we've heard them say 'slice and dice, bar chart, blah blah blah', and we get that. So we've always said that we are the cloud architects on the back end, which we are. We thought that saying "We are the next-generation data warehouse" would be quite enough to distinguish us. It does, but only when you start looking closely at the technology and what we're doing with it. I'll fill you in on that later. The point here is that our DWaaS platform and methodology is forcing us to invent new terms that the market can understand. I am continuing that responsibility. Fingers crossed.
Our experience is that we have been two steps in front of Amazon and four steps in front of the industry. This year we are only one step in front of Amazon, and we expect that next year they will start formalizing products around technology integrations we have already done. What is that technology integration? Let's call it ETL. We have taken ETL to the next level - higher performance, lower cost. But we haven't launched it as a product; rather, we've made it available as a service. Everything we do is a service in a systems integration and managed services framework. That's how we do business. We are focused on function in the context of the AWS cloud environment and architecture. That's not software you load on servers so much as it is API calls in a highly customized environment configuration. The bottom line is that with the elasticBI Framework we can get data from complex source to high-performance destination at a scale and speed unthinkable in traditional architectures. While I'm on that point, let me describe Multi-Tier DW.
Multi-Tier Data Warehousing is what we call our practice of combining multiple data storage and database technologies in a single analytic application. Imagine this example. A Full360 client has a website that generates 1 million unique customer hits a day from customer interactions. We use our Dragonfly realtime rules engine to score these customers as they use the website and add new attribute data at a peak rate of 30,000 REST API transactions per minute. This is powered by VoltDB, our fast-data in-memory database partner. This data is staged into a multi-node Vertica cluster that integrates internal financial data and industry benchmark data purchased from a third party. Additionally, we have added custom widgets to the website to capture additional data. It is made immediately available through our custom iOS app to mobile employees around the globe. History is automatically archived to S3 for low-cost high availability and then efficiently batch-loaded into Amazon Redshift to process billion-row queries on demand. We serve this data on one or more of our BI partner tools, perhaps Tableau, perhaps TIBCO Spotfire, perhaps Jaspersoft. That's a multi-tier DW solution.
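As a rough illustration of the archival leg of that pipeline, here's a minimal Python sketch of staging history to S3 and bulk-loading it into Redshift with COPY. The bucket, table, and role names are invented for the example:

```python
# A minimal sketch of the archival tier of a multi-tier pipeline:
# stage history to S3, then bulk-load it into Redshift with COPY.
# Bucket, table, and role names here are hypothetical.

def copy_statement(table, bucket, key, iam_role):
    """Build the Redshift COPY statement for a gzipped CSV staged in S3."""
    return (
        f"COPY {table} "
        f"FROM 's3://{bucket}/{key}' "
        f"IAM_ROLE '{iam_role}' "
        "FORMAT AS CSV GZIP;"
    )

sql = copy_statement(
    table="web_hits_history",
    bucket="example-archive",
    key="hits/2015/11/30.csv.gz",
    iam_role="arn:aws:iam::123456789012:role/RedshiftCopy",
)
# In a live pipeline you would upload with boto3 first, e.g.:
#   boto3.client("s3").upload_file(local_path, "example-archive", key)
# and then execute `sql` over a psycopg2 connection to the cluster.
```

The point of the tiering is that S3 holds cheap, durable history while Redshift only carries what the billion-row queries actually need.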
But notice how we have extended back by providing our own microservices in our client's website. Notice how we have extended forward into our client's user base with iOS apps. Notice how with Dragonfly we have added a new dynamic dimension of data to our client's CRM. Now we could stop there and say that, as a systems integrator, we have demonstrated a larger aegis of control. We could stop there and say that by employing in-memory, object-store and column-store data management we have evolved into next-gen Data Warehousing. But that would be incomplete, because we have done more than that.
So it leaves us with something of a marketing problem. Let's talk about the more.
One of the first operating principles of Full360 was that we are a DevOps company. We have reduced the cycle time of our data management. We can move applications from development to production in a matter of hours. Need to double your database staging mechanisms? Right away, sir! One of the things we do in our standard delivery is employ CloudWatch to monitor not only server performance (remember, everything isn't a server these days) but service performance. So we monitor the latency of the data process and can dynamically scale when necessary. We know when our customers' data sources are choking long before our customers do.
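A minimal sketch of what that service-latency monitoring looks like, assuming a hypothetical `PipelineLatency` custom metric in a made-up namespace:

```python
# Sketch: publish data-process latency as a CloudWatch custom metric so
# alarms can trigger scaling. Metric and namespace names are invented.

def latency_metric(started_at, finished_at, pipeline="staging-load"):
    """Shape one latency observation as a CloudWatch MetricData entry."""
    return {
        "MetricName": "PipelineLatency",
        "Dimensions": [{"Name": "Pipeline", "Value": pipeline}],
        "Value": finished_at - started_at,
        "Unit": "Seconds",
    }

metric = latency_metric(started_at=100.0, finished_at=172.5)
# Publishing the observation (requires AWS credentials), roughly:
#   import boto3
#   boto3.client("cloudwatch").put_metric_data(
#       Namespace="Full360/elasticBI", MetricData=[metric])
```

An alarm on that metric is what lets the platform scale before the customer ever notices the choke.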
Another principle is that we will use open source and be vendor-agnostic, using the best technology and best architectures available for our clients' projects. Architecturally speaking, nobody holds a candle to AWS. Azure might be worth consideration in a couple of years' time, but right now we're all about AWS. They are thought leaders as well as innovators. For Full360 that means we incorporate excellent technologies like HashiCorp's Vault, Terraform and Consul to help secure, configure and communicate in the elasticBI platform. For our virtualization, we've been using combinations of Vagrant, Docker, veewee and VirtualBox. For our orchestration we've been using Chef and CloudFormation; Terraform is next.
This is just the beginning, but I need to keep this blog post bite-sized. I'll help you understand more in the short term.
I've been rather interested to see how the media has reported on 'working conditions' at Amazon, the notoriously data-driven business. Like all partners with Amazon Web Services, I'm always glad to see them get more press. The inevitable benefits of cloud computing architecture are becoming better known to new industries every year. At the core of the kind of work we do, Business Intelligence, these benefits include allowing downmarket businesses to afford the kind of IT environment that was available only to multinationals just a few years ago.
But the NYT article (behind a paywall, of course) 'Inside Amazon: Wrestling Big Ideas in a Bruising Workplace' strikes the wrong tone. I'll simply sum it up like this: the article, and many like it, are anti-meritocracy.
I am working with some of the smartest, hardest-working people of my career, and we are a small, agile company. We keep thinking of ways to innovate in our particular business, and our CTO is notorious for keeping us on our toes by introducing new technologies fresh from open source development worldwide. And still, we can only keep about nine months ahead of what Amazon makes virtually infinitely scalable. This is unprecedented in computer science. At Amazon, information technology goes essentially from research to global industrialization in about two years. These are space-race times.
I'll tell you what Amazon's reputation is. It's that if you have a master's degree in computer science and 10 years' experience, you will make about 125K per year plus full benefits, and you will work hard. In other words, you'll have a great job with an awesome company on the cutting edge of technology, but you'll never get rich. You're better off working in some small, young company where you'll be the smartest dude and closer to the top. That way you can do some fraction of what Amazon does, get stock options and make money on the exit strategy. In other words, the way to do it is to let VCs throw ridiculous financing at your hot, trendy Silicon Valley startup, take the money and run.
Having been in this segment of the computing industry for nearly three decades, I can tell you how continuously surprised I have been by the inability of many companies to do what makes sense. If that weren't the case, the Dilbert cartoon would seem alien and nonsensical. Instead, Scott Adams has nailed the kind of mediocrity and dysfunction that plagues the best of our businesses, not only in America but worldwide. You don't have to be a revolutionary to make improvements in business. In fact, you don't even have to be a 'data scientist' to enable data-driven decision making. You simply need to know what to count, and build a system that counts it for you. Deliver those numbers to the decision makers and you're on your way.
It might seem odd, but the best businesses are already data-driven; it's just that the data is not necessarily in digital form. If you're a bus driver, you have an idea whether you are having a busy day on your route. When you stay in business, you know what you need to do to keep your head above water; you know where your nose is, but not exactly how many centimeters above the waves. Digitization of these metrics helps your organization under certain conditions, and this is what we identify together with our customers. We make a data model that follows the business model. We enable more people to see what the leadership sees. We socialize the metrics of the business.
Obviously, Amazon has been doing this, and clearly the details are not shared with the editors of the New York Times. So it shouldn't be surprising that we hear from the outliers in their seat-of-the-pants descriptions of the waves that overcame them. But that's human nature. We listen to outliers. Part of what we try to do as data experts is separate the signal from the noise and help identify whether outliers are just loud, squawking pests or the coal mine's actual canaries.
We are always happy to discover with our clients what kinds of accurate instrumentation we can provide by harnessing the data at their disposal. Every business can be data-driven; it's just a matter of disciplined collaboration with us at Full360.
Several years ago I read about something called limited liability identity. I thought it was one of the best ideas I'd heard in a very long time, so I wrote about it. I heard it from one of the partners at a company called Sxip. That company has died, but the idea lives on.
Recently I have restarted work on my sci-fi novel, whose working title is 'Borky's Beach'. I would have liked to call it 'The Informationist', but that one's already taken. At any rate, it is about a panoptic, bioengineered utopian near future on the edge of collapse. I think it will be brilliant, but that's just me. One of the components of this future is LastID, which is a combination of cyber-currency, panoptic marketing, federated identity and banking, all based in part on the idea of limited liability identity. Well, I talked to my co-worker at Full360, Rusty, over the weekend, and he tells me that a part of what this thing does is currently available.
So we're talking about two technologies that work to implement something that's actually very simple when you think about it. Apple Pay and Vault are the two technologies. I'll describe it here with some language that I may repurpose for the book.
How it works is really simple. You have some fraction of your funds pre-allocated for impulse buying. So before you leave your house, because part of the security includes geolocation, you thumb your brick and tell it that you want to spend $100 over at the Panorama Mall. It generates some new 'credit card numbers' that are useful for about six hours, and then you just shop. After the time runs out, whatever is unspent goes back into your cash bucket. When you get to the mall you just pick up whatever you want in the stores. If you go over your impulse limit, your brick automatically checks to see if you can recategorize the purchase into your monthly budget. If you can, then it authorizes. If not, then you literally have to go back home and add cash. That's usually not a problem, because you can put a hold on any merchandise in the whole supply chain for a small fee, and the sort of people who do this kind of shopping without prepaid subscriptions are generally rich anyway. I mean, why spend the time and money leaving your house to shop anyway?
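For fun, the mechanics of those expiring 'credit card numbers' can be sketched in a few lines of Python. Everything here is a toy invented for the book, not any real payment API:

```python
# Toy model of an expiring, pre-allocated impulse card: it carries a spend
# limit and an expiry, and unspent funds return to the cash bucket.
import time

class ImpulseCard:
    def __init__(self, limit, ttl_seconds, now=None):
        self.limit = limit
        self.spent = 0.0
        self.expires_at = (now or time.time()) + ttl_seconds

    def authorize(self, amount, now=None):
        now = now if now is not None else time.time()
        if now >= self.expires_at:
            return False          # card has lapsed
        if self.spent + amount > self.limit:
            return False          # over the pre-allocated budget
        self.spent += amount
        return True

    def refund_unspent(self):
        return self.limit - self.spent

card = ImpulseCard(limit=100.0, ttl_seconds=6 * 3600, now=0.0)
ok1 = card.authorize(60.0, now=10.0)    # within limit, within the window
ok2 = card.authorize(50.0, now=20.0)    # would exceed the $100 limit
leftover = card.refund_unspent()        # unspent funds go back to the bucket
```

The real systems add the recategorization and geolocation checks on top, but the core is just a limit plus a clock.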
So my new understanding is that Vault will work as an API that we can use to assign timed access and authorization to any resource under our management. If I want to authorize an ETL server to have access to a database server, I can provision it on demand. So this enables a new class of Lambda computing streams. Super cool.
Apple Pay looks like a cool and crisp implementation of limited liability identity, which basically means what I said above. You don't have permanent credit cards; you have pre-authorized, expiring certs. Nice.
What I do all day is put everything in boxes. I am a compulsive organizer. (Yes, I subscribe to Things Organized Neatly.) My organized things are data and the boxes are systems. I'm trying to figure out the best way to put all the data into all of the systems, but at the end of the day I'm a tour guide. I help people find the data they want in the systems they can afford. Sometimes the boxes they want are too big; most of the time they don't have enough time or interest, or they find the whole matter too confusing and would rather just guess.
What's interesting is not the techniques and methodologies of building data systems and the attendant analysis. What's most interesting is understanding how it is that people come to understand the data they feel is most important to them. What do people want to know badly enough so that they'll pay me money to organize it for them? You're a sailor, what does a sailor want to know? You're a security guard at the front desk of an office tower, what do you want to know? You're the buyer for oak barrels in a distillery in Kentucky, what do you need to know to do your job better? You're a doctor for a major airline trying to figure out the rate of respiratory infections for flight crews. You're a car dealer that is thinking about fleet sales. These are all functions of running a business, or any operation that requires some stream of data that changes. A guy like me finds a way to turn a static pile of data consumed just once, into a reliable flow of information that informs your work over time.
It still comes down to counting things, figuring out percentages, risks, keeping track of assumptions, proving that numbers tie out in audits. It requires persistence, an obsession for perfection when people would rather be sloppy, a head for logic, and the ability to communicate what it all means. It means keeping track of a million moving parts and using compute & network tools to do so. It means being economical about which tools to use and why. It means listening patiently to people with half an idea. It also means never knowing enough because people are always unsatisfied.
It's a field that will always be around as long as somebody is building the machines, because people will always want to know something and we've pretty much figured out how to make them trust machines. TV paved the way. People will want to know then they'll change their minds and want to know something else. Human curiosity is not infinite, but it's pretty large.
This is an old story but particularly interesting.
In 1995, I was working for a paper products company in Atlanta. I cannot even recall the project's details, except to remember that most of the employees of the company ran PCs without TCP/IP stacks and all of their networking was done through Citrix. Still, the project was going slowly.
I got a chance to meet with the president, whose family was involved in the business, and his top financial analyst, who looked like Lucy Liu and was deadly with a spreadsheet. It was clear that these two were the brains of the operation. We were going over boring bills of lading, literally shipping documents, when suddenly the president got an insight. He picked up one piece of paper as we sat in the conference room and called a warehouse manager.
"Ralph, this is Sam at HQ. You know our customer X?"
"Yes, it's our biggest customer."
"When did you last send them a shipment?"
"And how much did you send them?"
"1500 pounds. We send them twice a week, Tuesday and Thursday."
The president went on to discover that the warehouse manager used a flatbed stake truck with a capacity of ten pallets, and that 1500 pounds of product took up three. But the thing that caught his eye in the paperwork was something he already knew: it was customary to discount the freight cost when selling to your biggest buyers. This was a discount given at the discretion of the 85 warehouse managers nationwide. The freight cost was calculated per truck trip and tied to the cost of gasoline. Why make two trips with half a truckload and give the discount twice, if you could make one trip and give the discount once?
He had the financial analyst crunch the numbers and figure out how much the company could save by making fewer trips and/or not discounting freight. It was massive. I pulled the historical shipping records from the database. Within two hours we figured out how to save the company up to 1.2 million dollars per year.
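The arithmetic is easy to sketch. These figures are mine, invented for illustration only; the analyst's real numbers landed at roughly $1.2 million:

```python
# Back-of-the-envelope version of the freight insight, with invented
# numbers: two half-full trips per week versus one consolidated trip,
# with the customer discount granted per trip.
trips_per_week_now = 2
trip_cost          = 400.0   # per-trip freight, tied to gas prices (assumed)
discount_per_trip  = 150.0   # waived freight for the big customer (assumed)
weeks_per_year     = 52
warehouses         = 85

cost_now  = trips_per_week_now * (trip_cost + discount_per_trip)
cost_then = 1 * (trip_cost + discount_per_trip)   # consolidate to one trip
savings_per_warehouse_year = (cost_now - cost_then) * weeks_per_year
company_savings = savings_per_warehouse_year * warehouses
```

The shape of the calculation, not the invented inputs, is the point: per-trip costs and per-trip discounts both halve when you consolidate, and 85 warehouses multiply a small weekly saving into a very large annual one.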
The president asked me how long it would take to build a system that would use the spreadsheet formula against the shipping records at each warehouse. Two months, I said. At the time, however, none of the warehouse managers had networked computers.
Lessons learned:
1. You need an executive with keen insight into how the business actually operates. He knew how to read a shipping document, and he understood how warehouse managers work: why they do things the way they do, how they perceive orders from HQ, and what their financial incentives are.
2. You need to be able to make accurate models of the actual costs involved and prove them out before you build any systems. You can't just build a system that captures 'everything' and expect that it's going to tell you something valuable.
3. You need a business culture where you can push down responsibility for costs and revenues to the people who actually do the work, and show them, in dollar terms, how a change in behavior affects the bottom line. If your system only has a few end users at HQ, so what? There's a difference between knowing the right answer and doing something about it. Knowing only took us two hours.
4. You have to have compute infrastructure in place at a low enough cost so that the rare insights of cost savings are worth implementing a system for in the first place.
In the case of that company, number four killed the whole deal. If you counted up the warehouse managers, the cost of upgrading and networking them per warehouse, the time to build the system, and the licensing cost per new user, it would have completely offset the cost savings. The company ended up issuing a memo and a policy change, like an order from God, instead of building the system that would let the warehouse managers see their efficiency rewarded.
These days enterprise software takes a back seat to the cloud in terms of total cost of ownership. But like 1995, most companies are not forward enough in their thinking to rise to the leading edge of technology. Even so, technology is only one part of the equation in improving the business.
Where do I begin to tell the story? How about we go from the epiphany backwards? This morning I read this from Dave Sisk.
Hadoop is not even in the same ballpark as any of the CDBMS's that have been mentioned. It's not even a database, for that matter...it's a giant MPP ETL process...which is a great thing IF you use it as a giant MPP ETL process instead of trying to use it as a database. I've examined HBase closely, and looked at a colleague's implementation...it has to be the only key/value store that I can think of that is a worse piece of crap than MongoDB. (At least MongoDB is better than something.) My colleague's company's 60-node implementation of HBase (on big honkin' enterprise hardware) struggles to insert 2000 rows of data per second (I can insert at 10-20 times that into PostgreSQL running in a VM on my friggin' laptop), and reports run for hours (sometimes days). You can do the same work in a good columnar RDBMS 2-3 orders of magnitude faster...as in milliseconds or seconds (minutes at worst), instead of hours or days.
At a prior company, we used Vertica to consume hundreds of thousands of rows per second, and could return results from billion row high-reduction queries in a few hundred milliseconds (from about 5 hefty nodes)...Hadoop/Hive/HBase with hundreds of nodes could not come within 2 orders of magnitude of that kind of performance, no matter how much hardware you throw at it.
That is what I've been waiting to hear for years. So I wrote back.
Finally, thank you for helping me understand what I didn't: that there is a massively huge performance gap between Vertica and Hadoop. I've been wondering why I never meet Hadoop folks in my space (Business Intelligence), and I've been listening to the guys at MapR tell me theories of why their Amazon-approved version of EMR is going to be a world-beater. I have assumed that all Hadoop is simply a very large scalable file system (and I've started to call it HDFS) plus some clunky tools that lack a semantic layer. But I assumed that a reasonably competent Java programmer (who wants to be that?) could make it perform at a good clip. No matter what the performance, I expected that it was mostly ETL-class tech. But I figured sooner or later somebody was going to build a semantic layer of SQL onto it, and then it would be serious competition for columnar DBs, primarily because of mind and market share.
However, if Hadoop is little more than a glorified giant file system, and MapReduce is to HDFS what regex, grep, sed, awk and perl are to ordinary file systems, then there's no way it will ever compete on performance and cost efficiency with columnar tech.
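That analogy is easy to demonstrate in miniature. Here's a word count, the canonical MapReduce example, in plain Python, showing the map, shuffle, and reduce phases that a Hadoop cluster runs across many machines:

```python
# MapReduce in miniature: map emits (word, 1) pairs, shuffle groups them
# by key, reduce sums each group. Conceptually this is grep/awk work,
# just distributed over a giant file system.
from collections import defaultdict

def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data big lake", "data lake"]
counts = reduce_phase(shuffle(map_phase(lines)))
```

A columnar database does the equivalent aggregation against pre-organized, compressed columns, which is exactly why the scan-everything MapReduce style can't touch its performance on BI-shaped queries.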
I'm going to assume HDFS is a data lake from here on out, unsuitable for BI queries.
That assumption means a great deal in my world, and it erases a class of insecurity I've had over every discussion of 'big data' I've been a part of for several years. You see, none of my customers have Hadoop or ask for Hadoop. All of them can form the syllables. They know about Hadoop. Hadoop *is* big data, as far as the layman's world is concerned (and another world that is not BI). So I haven't had a *reason*, outside of great curiosity, to build anything with Hadoop.
First of all, I have S3. There's no bigger data lake I've ever needed. Terabytes are not a problem. I mean, I've got terabytes at home. But under elasticBI, I've got S3 whipped into shape and smartly integrated with every database I care about (Vertica, Redshift, Essbase, VoltDB). So I never have to concern myself with running out of space for staging or moving massive amounts of data. It's always about database optimization. That requires structure, structure requires purpose, and I know how to get that from my customers. Hadoop is about unstructured data.
Now when I start talking about unstructured data, I mean web data that has a volatile structure. And in that space I see tools like MongoDB, Cassandra and CouchDB, with Couch as the winner. I've heard horror stories about Mongo, and that Cassandra is a big tease that never quite gets all of her drama together. There's also Riak, which is the heavyweight champion in the space, or so I hear and believe. But I'm not building ecommerce websites, and I don't need to manage volatile content and serve up XML to be rendered, so I have no need for that class of data management. Not right now, anyway. I want to be the master of all data management, data store and database worlds, but I have to deal with one continent at a time, and I can accept that Riak and Couch are on another planet. Planet Unstructure.
But when they say they're going to put on SQL clothing and take their big data + analytics into the Data Warehouse realm, that's war of the worlds. I don't like the prospects of that imminent invasion. Because, really, websites are the masters of handling 10,000 simultaneous concurrent users. I can't do anything like that. Those guys must know something.
Well, so did LAMP stackers at one time. I'm going to take a gamble and commit HDFS and all that manipulation to the back porch. It's not competition in BI, and it never really was, but that was really in my mind a matter of market focus. Now I understand technically that there are hard limits to what all of that can ever do and strong technical reasons why what I do with my database tech is not threatened by these other systems.
I still want to know. I still want to spend some hands on time and eyeball some NoSQL tech in the context of their own systems. I wish I knew a guy, personally. But maybe our paths haven't crossed for good reasons. Either way, I am stepping out of the shadow of the elephant. Hadoop is just a data lake fallback for collecting a lot of stuff that may or may not be map reduced into something coherent. It's a specialty ETL transform that we at Full360 will do with realtime streams. So maybe if my Colorado River data pipeline fails, it will form a Hadoopified Salton Sea while we fix the levee. What it most definitely is not, is Fast Data. Meaning it has no place in the IoT future and is an artifact of what we will inevitably call something along the lines of JBOD. Lakes are meant to be drained. And now I've completely introduced heterogeneous metaphors into this. But that's kind of how epiphanies work, neh?
I'm out of the shadow of the elephant who drinks up lakes with that weird trunk of his. Yay circus tricks!
If I were in charge of IBM this is what I would do.
The first thing to do is get rid of PwC. You have to admit that these guys were hired simply because IBM software and processes were too complex for customers to implement on their own. You have to ask yourself: how did Facebook succeed without an army of consultants? It succeeded because the software made things obvious. That is what you can do with advanced web design. So what IBM has to do is change the way they think about software and systems implementation. Here's my idea.
IBM cannot compete with Amazon, Google or Microsoft Azure in the pure cloud space. It is already too late to attract new customers, and all it can do is convert its installed base. And if converting means giving that installed base a price break, then it cannot improve revenue. IBM cannot compete with Intel; the PowerPC chipset is dead.
So what IBM has to do is go after the fractured midmarket, leapfrog all the small competitors in that arena, and consolidate that business. This will be all about cutting off the high end from Apple and Microsoft and then eroding the market share of SAP and Oracle. But IBM also has something that midmarket software vendors do not have, which is lots of capital. This means that IBM can put a physical presence in markets where small competitors are at a loss. More on that in a minute.
The place where IBM is going to make cash is by competing in broadband. Google Fiber is going to be a competitor, but I think an IBM and Cisco partnership would make mincemeat out of Time Warner Cable and Comcast, and seriously put a hurting on Verizon. Broadband is a cash cow. IBM can make sense of the new net neutrality rules and lock any other smaller players out of this market.
So what does this mean? This means that in any town with more than 50,000 people, IBM will have a branch. That branch will be like a WeWork / CoLoft presence. It will attract all the young programmers who don't want to go to places like PWC. It will give IBM first crack at rewriting all of the midmarket applications, which would be no-brainers for these young, ambitious programmers. Build next-generation software for the midmarket and let them leapfrog all the legacy systems. This gives IBM a new face, a cloud face, for all of its current AS/400 legacy systems. Mass migration. Lift and shift. IBM's first purchase? Workday. IBM's next purchase? RadioShack storefronts. IBM needs to keep its mainframe business for a time, but shed its other branded server hardware business faster than Dell and HP. Everything is apps. IBM has the advantage.
Put IBM into the mainstream of the midmarket and take it over. Build next-generation, cloud-capable applications with more customer responsibility than Amazon or Google are ready to take. Run that section of the business the way PeopleSoft used to be run. Serve coffee in the IBM workcenters. Cohost meetups. Make it your friendly neighborhood IBM. Get into fast-cycle agile development of midmarket applications that scale. Apps for business. Same approach as the AS/400, with next-generation technology and a new wave of developers, excuse me, 'devs'. Here's the trick. Open source the IBM Research algorithms on the IBM cloud, with a small license fee for off-IBM-cloud implementations. Buy Rackspace if you have to and host a second kind of open-standards cloud. Two cloud models, one company. Only IBM could do that.
There's a question about salesforce.com. And that question is whether their API is sufficiently robust to compete with the APIs of AWS and Google. Here I'd make a side bet on Apple's Swift. I would bet that Java is dead. Let Oracle have it. But I would also make bets on Python and Ruby, and on Go, because it's a good thing for IBM to be totally open. Java has too much legacy crap that existed before the era of clouds. Make a fresh start.
At some point, let's say four years from now, the functionality gap between the new midmarket apps and the legacy IBM mainframe apps will become clear. My bet would be on the 80/20 rule: scaling up the midmarket apps will be cost-beneficial to current mainframe customers. The 20% that can't migrate now pay a premium. The rest they bring over, and they lock out the rest of the market with lower prices. This will put pressure on SAP and Oracle and revitalize the 'enterprise' space. The enterprise space is now a web-beautiful, neighborhood-centric dev's paradise, and those young, hungry programmers get a crack at SAP and Oracle, at Microsoft and Google, and at anybody else who tries to enter the business app space. And oh, by the way, IBM's broadband will now be at an appreciable market share in the new world of net neutrality regulation. ROLM, anyone?
McKinsey has several analysts who publish on the subject of 'digitalization'. The basic idea, to which I subscribe and which I find compelling, is that IT can transform the way that businesses interact with their customers and with the public.
Generally, when you say 'IT', however, you get this (old) idea in your mind of a staff of managers and programmers supporting the internal network, enterprise-class servers, and 'critical' applications for a largely internal, intranet-captive audience of employees and contractors. 'Digital' incorporates social media and industry-standard reporting systems, even beyond CRM, which is about as far as most companies go to tighten their relationship with declared, paying customers in the marketplace.
So my POV with regard to Digital means the holy grail of transparency.
Imagine the nightmare scenario. Your company is on the witness stand or testifying before Congress. What does the entire planet want to know? They will always ask "Who knew what and when?" That is the most difficult question any organization can face, and it generally requires a level of systems integration that very few organizations can muster. However, it is fundamentally true for the most high-stakes information games on the planet: commodities & securities trading systems, military communications, first responders, and national security intel systems.
Knowing who knows what, and exactly when, in an auditable, real-time system requires the most advanced IT systems that exist - at least to my knowledge. Most businesses are light years away from that level of systems integration. McKinsey perceives that and knows that any business or organization that can advance its systems in that direction will have a huge advantage.
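To make the "who knew what and when" idea concrete, here is a minimal sketch of an auditable record: an append-only log where every entry captures actor, fact, and timestamp, and is hash-chained to the previous entry so after-the-fact tampering is detectable. This is my own illustrative toy, not any particular vendor's system; all the names here are invented.

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only, hash-chained log of who knew what and when.
    Illustrative sketch only; a real system would also need durable,
    replicated storage and access control."""

    def __init__(self):
        self.entries = []

    def record(self, who, what):
        # Chain each entry to the previous one's hash.
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"who": who, "what": what, "when": time.time(), "prev": prev_hash}
        body = json.dumps(
            {k: entry[k] for k in ("who", "what", "when", "prev")}, sort_keys=True
        )
        entry["hash"] = hashlib.sha256(body.encode()).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self):
        # Recompute every hash; any edit to 'who', 'what', or 'when' breaks the chain.
        prev = "0" * 64
        for e in self.entries:
            if e["prev"] != prev:
                return False
            body = json.dumps(
                {k: e[k] for k in ("who", "what", "when", "prev")}, sort_keys=True
            )
            if hashlib.sha256(body.encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("trader-17", "saw the pricing memo")
log.record("compliance", "flagged the trade")
print(log.verify())
```

The point of the hash chain is that the answer to "who knew what and when" is not just stored, it is verifiable: rewrite any historical entry and `verify()` returns False.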
I try to make my living with this kind of thing in mind, and yes, I do analytics. Let me give you another example. A plane crash. Right now, the capability exists to stream flight recorder (and other sensor) information in real time, with enough eyes watching that anomalies can generate alerts and produce corrective action. However, the airlines have not decided to spend the time and money on making those advances. There's nothing a traditional IT organization could do to cost-justify that. But a forward-thinking organization sees the advantage of the transparency that digitalization brings.
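The sensor-stream idea above can be sketched in a few lines: flag any reading that lands far outside the rolling statistics of the recent window. This is a toy stand-in for real flight-data monitoring (which would involve telemetry links, many correlated channels, and human review), assuming nothing beyond a plain sequence of numeric readings.

```python
from collections import deque
import math

def anomaly_alerts(stream, window=20, threshold=3.0):
    """Yield (index, value) for readings more than `threshold` standard
    deviations from the rolling mean of the previous `window` readings.
    Toy illustration of streaming anomaly detection, not a real avionics method."""
    recent = deque(maxlen=window)
    alerts = []
    for t, value in enumerate(stream):
        if len(recent) == window:
            mean = sum(recent) / window
            std = math.sqrt(sum((x - mean) ** 2 for x in recent) / window)
            if std > 0 and abs(value - mean) > threshold * std:
                alerts.append((t, value))
        recent.append(value)
    return alerts

# Steady altitude readings with one sudden excursion at index 40:
readings = [10000.0 + (i % 3) for i in range(40)] + [4000.0] + [10000.0] * 5
print(anomaly_alerts(readings))  # [(40, 4000.0)]
```

The cost-justification argument is exactly about who runs code like this continuously, on live streams, with someone paid to act on the alerts.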