Decision makers are asking stakeholders "Why not open source?" whenever proposals for warehousing, analytics, or data science involve purchasing proprietary software. Big-data platforms backed by the Apache Software Foundation, and Hadoop in particular, have solved critical problems in streaming, enterprise data warehousing, analytics, and storage, running in a distributed fashion over commodity hardware at a fraction of the total cost of ownership of proprietary software. Enterprises are increasingly deploying open-source software and purchasing support to ensure smooth operations.
Market
Hortonworks (HDP) and Cloudera (CLDR) are the leading Hadoop support providers, catering to 2,000+ enterprises, including Fortune 50 companies. Hortonworks' revenue guidance for FY-17 is USD 250M, and Cloudera's is USD 350M, in net new and renewal revenue at 35% YoY growth, confirming Hadoop adoption. BFSI, retail, and healthcare companies additionally contract expensive L1/L2 support teams. Cloud infrastructure providers such as Amazon, Google, and Microsoft also offer advanced Hadoop-based big-data services, though these are limited to managed infrastructure. Currently, 80% of enterprises using Hadoop run on-premises, while hybrid deployments that include the cloud will gain prominence by 2020.
Support demands are becoming near real-time. Interconnected open-source components make great solutions, but they simultaneously add multiple fault points that generate a steady stream of support cases.
● A video-on-demand provider wants to know why its Spark recommendation applications haven't run in the last hour.
● A car insurer wants to know why its Hive-based daily reports are not generated yet.
● An IoT-based insurance processor wants to know why its logs are not being processed.
In a distributed environment built on cutting-edge software such as YARN, cloud platforms, and Docker, such questions cannot be answered by traditional support, which focuses on workforces, training, and processes rather than on advanced technical solutions.
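To see why tooling matters here, consider how the first question above might be answered programmatically. A minimal sketch: Hadoop's YARN ResourceManager exposes a REST endpoint (`/ws/v1/cluster/apps`) listing applications with their type, final status, and finish time, which can be filtered for recently failed Spark jobs. The `RM_URL` host name is a placeholder for your cluster's ResourceManager; the example below runs against canned data rather than a live cluster.

```python
import json
import time
import urllib.request

# Placeholder ResourceManager address; adjust for your cluster.
RM_URL = "http://resourcemanager:8088/ws/v1/cluster/apps"

def failed_spark_apps(apps, since_ms):
    """Filter YARN app records (shaped like the RM REST API's
    /ws/v1/cluster/apps response) down to Spark applications that
    finished unsuccessfully at or after `since_ms` (epoch millis)."""
    return [
        a for a in apps
        if a.get("applicationType") == "SPARK"
        and a.get("finalStatus") in ("FAILED", "KILLED")
        and a.get("finishedTime", 0) >= since_ms
    ]

def fetch_apps(rm_url=RM_URL):
    """Fetch the live application list from the ResourceManager."""
    with urllib.request.urlopen(rm_url) as resp:
        return json.load(resp)["apps"]["app"]

# Canned data standing in for a fetch_apps() call on a real cluster:
one_hour_ago = int(time.time() * 1000) - 3600 * 1000
sample = [
    {"id": "application_1", "applicationType": "SPARK",
     "finalStatus": "FAILED", "finishedTime": one_hour_ago + 60_000},
    {"id": "application_2", "applicationType": "MAPREDUCE",
     "finalStatus": "FAILED", "finishedTime": one_hour_ago + 60_000},
    {"id": "application_3", "applicationType": "SPARK",
     "finalStatus": "SUCCEEDED", "finishedTime": one_hour_ago + 60_000},
]
print([a["id"] for a in failed_spark_apps(sample, one_hour_ago)])
# → ['application_1']
```

A one-off script like this answers one question for one component; the point of the sections that follow is that enterprises need this kind of determination done continuously, contextually, and across the whole stack.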
Enterprises need an end-to-end solution for capacity planning, issue identification, and rule-based recommendations for correction in near real time. As time-critical business processes move to Hadoop and other open-source compute platforms, more logs and dashboards cannot fill the need for intelligent, contextual solutions that enable proactive indication of failures and reactive determination of incidents in real time.
Stay tuned for Accelo announcements.