5 Key Attributes of an Effective Data Monetization Strategy


In the cognitive computing era, a new revenue-generation stream has emerged, with data at the center of the modern digital business model. One of the key capabilities cognitive computing enables for an organization is the ability to generate additional revenue streams by using data effectively. In the big data world we call this data monetization.
Internal data monetization has already done an amazing job of transforming business across verticals: improving customer experience, enabling more personalized marketing and sales, deterring fraud, and more. The emergence of big data is transforming professions and industries. We are seeing big data do wonders for cost optimization and customer experience, and we increasingly see a growing trend among customers to create new revenue streams with big data. Customers ranging from banks and telecommunications providers to energy and utilities companies and retailers have the potential to earn new revenue from the vast amounts of data they hold. Each of these businesses is experimenting with different ways to monetize the value of the data it gathers during normal operations, and each expects to make considerable revenue from the difference between the cost of collecting and storing the data and what the resulting insights and outcomes can be sold for.

According to the McKinsey Global Institute report "Big data: The next frontier for innovation, competition, and productivity," big data can create as much as $700 billion in value for consumer and business end users. Capturing this value will require the right enablers, including sufficient investment in technology, infrastructure, and personnel, as well as appropriate government action.

1. Identifying your target customers' needs, requirements and aspirations

Before you embark on the journey of making money from your data, it is important to profile your target customers, their verticals, and their parameters for success. A case in point: telcos target retailers and mall operators with insights about the anonymous movement of people throughout a property and its surroundings, delivering store or business catchment analysis based on real behavior, not just proximity to a location.

2. Identifying data assets: raw and refined, internal and external

Data monetization is much more than storing and selling data. It is about generating revenue from data enablers such as insights, outcomes, and partnerships. Companies can benefit from a centralized data science team that partners with the business and potential customers by identifying data that differentiates, exploring use cases to solve, and helping to jump-start business teams.

One retail customer we work with sells real-time supply chain reports to merchant wholesalers. The company uses data from its Hadoop and Spark cluster to generate revenue-driving reports for the wholesalers. The key ingredient is the blending of purchase data from POS systems with transaction data from banks. With Apache Spark and Kafka, the company runs these reports in just hours, and with scalability models in place it expects to grow this business to 25% of overall revenue. The analytics in these reports help merchants with customer segmentation, cross-sell analysis, and more.

3. Addressing regulatory and legal issues with technology

How you share your data is about balancing the need to innovate against the risks of using your data. Strike that balance with clear responsibilities and pragmatic access: enforce compliance with data security, privacy, and retention policies and processes, so that you maintain consumers' trust and meet regulatory and legal requirements. Company privacy policies must be clear and well understood by the business and technical teams alike. Access should be determined by use-case requirements and priorities.

4. Data as a service and the business model

Operationalizing your data monetization strategy calls for the right business model, the right strategic alliances, and the right partner. Companies are building sophisticated big-data-as-a-service business models based on both volume and value.
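The supply-chain example above, blending POS purchase data with bank transaction data, can be sketched in miniature. In production this would be a Spark job over data landed in Hadoop; the version below is a minimal pure-Python illustration, and every field name (`card_token`, `store`, `home_region`, etc.) is hypothetical rather than taken from any real system:

```python
# Minimal sketch of blending POS purchases with bank-supplied attributes.
# A production pipeline would express the same join/aggregate in Spark;
# all record layouts here are invented for illustration.
from collections import defaultdict

# POS purchase records: what was sold, where, and the card token used to pay.
pos_purchases = [
    {"card_token": "t1", "store": "downtown", "sku": "A100", "amount": 19.99},
    {"card_token": "t2", "store": "downtown", "sku": "B200", "amount": 5.49},
    {"card_token": "t1", "store": "airport",  "sku": "A100", "amount": 21.99},
]

# Bank transaction data keyed by the same card token, carrying an
# anonymised attribute (home region) that the POS feed lacks.
bank_transactions = {
    "t1": {"home_region": "north"},
    "t2": {"home_region": "south"},
}

def catchment_report(purchases, bank):
    """Join POS rows to bank attributes and roll up revenue by
    (store, home_region) -- a simple customer-catchment report."""
    totals = defaultdict(float)
    for row in purchases:
        match = bank.get(row["card_token"])
        if match is None:
            continue  # no bank record: the row is dropped from the blend
        totals[(row["store"], match["home_region"])] += row["amount"]
    return dict(totals)

report = catchment_report(pos_purchases, bank_transactions)
print(report)
```

The same shape of report feeds the customer segmentation and cross-sell analysis mentioned above: once revenue is keyed by store and customer attribute, segments fall out of the grouping keys.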
The win-win business model will be heavily influenced by the number of insights a business can provide to its customers and the value those insights generate for them.

5. Defining the technology strategy: Hadoop, Spark and IBM Watson Data Platform

The emergence of open source technologies gives organizations in the new data monetization space tremendous power to break even more swiftly. Data provides maximum value when it is fresh, and technologies like Apache Spark and Kafka give businesses real-time analysis capabilities at lightning speed. This technology takes a wholly different approach to data and data management than anything before it; it is the key enabler of the far-reaching transformation that is "big data." In short, these changes all lead back to the simplest fact about the underlying technology: the agility of data.

The ideal big data environment is powered by open standards and supports collaboration. IBM Watson Data Platform brings the power of machine learning and cognitive computing, built on open source Apache Spark, to enterprises. Data platforms such as this will form a solid foundation for a data monetization strategy and enable organizations to monetize data quickly and easily.
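The "fresh data" point above boils down to windowed aggregation over a stream. A Spark Structured Streaming job reading from Kafka would group events into tumbling time windows and count or sum within each; the sketch below simulates only that windowing logic with the standard library, with hypothetical timestamps and SKUs, so the shape of the result is visible without a cluster:

```python
# Sketch of the tumbling-window aggregation a Spark-on-Kafka streaming job
# performs, simulated with the standard library. Event data is invented.
from collections import Counter

WINDOW_SECONDS = 60  # tumbling window size

def window_counts(events, window=WINDOW_SECONDS):
    """Bucket (timestamp, sku) events into fixed tumbling windows and
    count purchases per (window_start, sku)."""
    counts = Counter()
    for ts, sku in events:
        window_start = (ts // window) * window  # floor to window boundary
        counts[(window_start, sku)] += 1
    return dict(counts)

# Five purchase events spread over roughly two minutes.
events = [(5, "A100"), (42, "A100"), (61, "B200"), (119, "A100"), (130, "B200")]
print(window_counts(events))
# {(0, 'A100'): 2, (60, 'B200'): 1, (60, 'A100'): 1, (120, 'B200'): 1}
```

In the real pipeline the windowing is declared rather than hand-coded (a `groupBy` over a window column in Spark), and results are emitted continuously as each window closes, which is what keeps the monetized reports fresh.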
