Data virtualization is the process of offering data consumers a data access interface that hides the technical aspects of stored data, such as location, storage structure, API, access language, and storage technology. Consuming applications may include business intelligence, analytics, CRM, enterprise resource planning, and more, across both cloud platforms and on-premises environments.
Data Virtualization Benefits:
● Data scientists and decision makers gain fast access to reliable information
● Improved operational efficiency: integration is more flexible and agile because virtual data stores can be created in short cycles without touching the underlying sources
● Improved data quality due to a reduction in physical copies
● Improved usage through the creation of subject-oriented, business-friendly data objects
● Increased revenues
● Lower costs
● Reduced risks
Data virtualization abstracts, transforms, federates, and delivers data from a variety of sources, presenting a single access point to the consumer regardless of the physical location or nature of the underlying data sources. It is based on abstracting the data contained in a variety of sources (databases, applications, file repositories, websites, data services vendors, etc.) in order to provide single-point access to that data, and its architecture rests on a shared semantic abstraction layer rather than limited-visibility semantic metadata confined to a single data source. Data virtualization is an enabling technology that provides the following capabilities:
• Abstraction – Abstracts the technical aspects of stored data, such as location, storage structure, API, access language, and storage technology.
• Virtualized Data Access – Connects to different data sources and makes them accessible from one logical place.
• Transformation / Integration – Transforms, improves the quality of, and integrates data as needed across multiple sources.
• Data Federation – Combines result sets from across multiple source systems.
• Flexible Data Delivery – Publishes result sets as views and/or data services that consuming applications or users execute on request.
In delivering these capabilities, data virtualization also addresses requirements for data security, data quality, data governance, query optimization, caching, and more, and it includes functions for development, operation, and management. Data scientists use data virtualization to integrate data from many diverse sources, logically and virtually, for on-demand consumption by different analytical applications. For example, data virtualization is used to address challenges such as rogue data marts and to feed business intelligence applications, enterprise resource planning and content systems, and portals.
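To illustrate the single-access-point idea, here is a minimal Python sketch that presents two physical sources (a relational database and a flat file) to the consumer as one logical, business-friendly view. The file names, table, and columns are hypothetical, and a real data virtualization platform would push queries down to the sources rather than copying data into memory as this sketch does.

```python
# Minimal sketch of the federation idea: two physical sources (a SQLite
# database and a CSV file) are exposed to the consumer as a single logical
# view, hiding location and storage technology.
# File names, table names, and columns are hypothetical.
import sqlite3
import pandas as pd

def load_crm_orders() -> pd.DataFrame:
    """Read orders from an operational database (hypothetical 'crm.db')."""
    with sqlite3.connect("crm.db") as conn:
        return pd.read_sql_query(
            "SELECT customer_id, order_total, order_date FROM orders", conn
        )

def load_web_events() -> pd.DataFrame:
    """Read clickstream exports from a flat file (hypothetical 'web_events.csv')."""
    return pd.read_csv("web_events.csv", usecols=["customer_id", "page_views"])

def customer_360_view() -> pd.DataFrame:
    """Federate both sources into one subject-oriented, business-friendly view."""
    orders = load_crm_orders()
    events = load_web_events()
    return orders.merge(events, on="customer_id", how="left")

if __name__ == "__main__":
    view = customer_360_view()
    # Consumers query the virtual view, e.g. total spend per customer,
    # without knowing which system each column came from.
    print(view.groupby("customer_id")["order_total"].sum().head())
```

The consumer of customer_360_view never needs to know where each column lives or how it is stored; that is the abstraction and federation the capabilities above describe.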
See http://bit.ly/13Fi03G, where Intel CIO Kim Stevenson discusses how Intel IT is looking to leverage predictive analytics to deal with the sea of data out there, and how this is already creating new opportunities for the organization.
In one example of a successful big data implementation at Intel, Stevenson discussed a pilot program the company ran that identified the customers who were more likely to purchase than others, based on the heaps of information generated at Intel. “We looked at that and we examined how our sales coverage model was against those customers, and we took our inside sales force and made calls based on what the predictive analytics said about which customers were more likely to purchase,” explained Stevenson. “In a short amount of time, we were able to cover customers that weren’t previously covered and generate millions of dollars in incremental revenue.”
Another example Stevenson gave was a retroactive analysis of a failed program that she said cost the silicon giant $700 million when the dust settled. Using the massive amounts of manufacturing data available to them during the die process, Stevenson says they are now able to see problems sooner, using big data analytics to assist in the debugging process.
When asked what advice she would give to others, Stevenson said she advises fellow CIOs to partner with their business units to identify where the hidden potential is, let that become the guiding light in terms of which problems are focused on, and then stay focused. “There’s a lot of questions you can answer about any given business, but if you stay focused on a small set of business problems, then you’ll create some early wins and you’re able to grow based on your successful track record.”
Stevenson says Intel has set rules about how to operate predictive analytics in the company, which include small teams of roughly five people and problems that can be solved within a six-month period. Ultimately, says Stevenson, these projects are tied to ROI, for which Intel has set a target of $10 million for the initial deployment. “That helps us narrow and prioritize the problems to higher value problems for the company.”
Stevenson also advises that enterprises start to build the skills they need. “You need data scientists, visualization experts, data curators, and those types of skills – they’re rare today,” she comments. “It’s harder to learn the business knowledge that is needed to make the data into valuable information than to learn the IT technical skills,” she says, advising people to grow the skills internally. “It will be a focused, diligent progression of taking the people that understand your business process today and complementing them with the technical skills required in building big data management systems or predictive analytic models.”
The goal of data analytics (big and small) is to get actionable insights that result in smarter decisions and better business outcomes. How you architect business technologies and design data analytics processes to get valuable, actionable insights varies.
It is critical to design and build a data warehouse / business intelligence (BI) architecture that provides a flexible, multi-faceted analytical ecosystem, optimized for efficient ingestion and analysis of large and diverse datasets. There are three types of data analysis:
• Predictive (forecasting)
• Descriptive (business intelligence and data mining)
• Prescriptive (optimization and simulation)
Predictive Analytics
Predictive analytics turns data into valuable, actionable information. It uses data to determine the probable future outcome of an event or the likelihood of a situation occurring. Predictive analytics encompasses a variety of statistical techniques from modeling, machine learning, data mining, and game theory that analyze current and historical facts to make predictions about future events. In business, predictive models exploit patterns found in historical and transactional data to identify risks and opportunities. Models capture relationships among many factors to allow assessment of the risk or potential associated with a particular set of conditions, guiding decision making for candidate transactions. The three basic cornerstones of predictive analytics are:
• Predictive modeling
• Decision analysis and optimization
• Transaction profiling
One example of using predictive analytics is optimizing customer relationship management systems: it can help an organization analyze all of its customer data, exposing patterns that predict customer behavior. Another example: for an organization that offers multiple products, predictive analytics can help analyze customers’ spending, usage, and other behavior, leading to efficient cross-selling of additional products to current customers. This directly leads to higher profitability per customer and stronger customer relationships. An organization must invest in a team of experts (data scientists) and create statistical algorithms for finding and accessing relevant data. The data analytics team then works with business leaders to design a strategy for using predictive information.
Descriptive Analytics
Descriptive analytics looks at data and analyzes past events for insight into how to approach the future. It looks at past performance and seeks to understand that performance by mining historical data for the reasons behind past success or failure. Almost all management reporting, such as sales, marketing, operations, and finance, uses this type of post-mortem analysis. Descriptive models quantify relationships in data in a way that is often used to classify customers or prospects into groups. Unlike predictive models, which focus on predicting a single customer behavior (such as credit risk), descriptive models identify many different relationships between customers or products. Descriptive models do not rank-order customers by their likelihood of taking a particular action the way predictive models do; they can be used, for example, to categorize customers by their product preferences and life stage. Descriptive modeling tools can also be used to develop further models that simulate large numbers of individual agents and make predictions. For example, descriptive analytics examines historical electricity usage data to help plan power needs and allow electric companies to set optimal prices.
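To ground the distinction between the two styles, here is a minimal, illustrative sketch in Python using scikit-learn on synthetic data: a logistic regression scores how likely each customer is to purchase an additional product (predictive), while k-means groups customers into behavioral segments without predicting any single action (descriptive). The features, labels, and thresholds are invented for illustration only and are not drawn from the examples above.

```python
# Sketch of predictive vs. descriptive analysis on synthetic customer data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Illustrative customer features: annual spend, visits per month, tenure (years).
X = np.column_stack([
    rng.gamma(2.0, 500.0, n),   # spend
    rng.poisson(4, n),          # visits
    rng.uniform(0, 10, n),      # tenure
])
# Synthetic "purchased follow-on product" label loosely tied to spend and visits.
y = ((X[:, 0] > 900) & (X[:, 1] > 3)).astype(int)

# Predictive: estimate the probability that a customer buys the next product.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
purchase_prob = model.predict_proba(X_test)[:, 1]
print("Top prospects (indices of highest predicted likelihood):", purchase_prob.argsort()[-5:])

# Descriptive: group customers by behavior rather than predicting one action.
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("Customers per segment:", np.bincount(segments))
```

The predictive model rank-orders individual customers by the likelihood of one specific action, while the clustering step only describes how customers group together, which mirrors the contrast drawn in the text above.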
Prescriptive Analytics
Prescriptive analytics automatically synthesizes big data, mathematical sciences, business rules, and machine learning to make predictions and then suggests decision options that take advantage of those predictions. It goes beyond predicting future outcomes by also suggesting actions to benefit from the predictions and showing the decision maker the implications of each decision option. Prescriptive analytics anticipates not only what will happen and when it will happen, but also why it will happen. Further, it can suggest decision options for taking advantage of a future opportunity or mitigating a future risk, and illustrate the implications of each option. In practice, prescriptive analytics can continually and automatically process new data to improve prediction accuracy and provide better decision options.
Prescriptive analytics synergistically combines data, business rules, and mathematical models. The data inputs may come from multiple sources, internal (inside the organization) and external (social media and others). The data may be structured, which includes numerical and categorical data, as well as unstructured, such as text, images, audio, and video, including big data. Business rules define the business process and include constraints, preferences, policies, best practices, and boundaries. Mathematical models are techniques derived from the mathematical sciences and related disciplines, including applied statistics, machine learning, operations research, and natural language processing.
For example, prescriptive analytics can benefit healthcare strategic planning by combining operational and usage data with external factors such as economic data, population demographics, and population health trends. This allows more accurate planning of future capital investments, such as new facilities and equipment utilization, and clarifies trade-offs such as adding beds to an existing facility versus building a new one. Another example is energy and utilities. Natural gas prices fluctuate dramatically depending upon supply, demand, econometrics, geopolitics, and weather conditions. Gas producers, transmission (pipeline) companies, and utility firms have a keen interest in more accurately predicting gas prices so that they can lock in favorable terms while hedging downside risk. Prescriptive analytics can predict prices by modeling internal and external variables simultaneously, and can also provide decision options and show the impact of each one.
Analytics on the customer supply chain provides decision makers with actionable information to run the business. Learn about the technology that enables and supports a world-class customer supply chain and the role of data virtualization. Data virtualization offers the core capabilities needed for visibility into customer information from disparate systems and agile management of complex data.
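To make the prescriptive pattern concrete, here is a minimal, hypothetical sketch in Python: a demand forecast (standing in for a predictive model's output) is combined with business rules expressed as constraints, and a linear-programming solver suggests a decision option. The products, capacities, and prices are illustrative assumptions, not figures from the examples above.

```python
# Sketch of the prescriptive pattern: prediction + business rules -> suggested decision.
from scipy.optimize import linprog

# Predicted demand (units) for two hypothetical products next quarter,
# standing in for the output of a predictive model.
predicted_demand = [1200, 800]
profit_per_unit = [25.0, 40.0]

# Decision variables: units to produce of product A and product B.
# linprog minimizes, so negate profit to maximize it.
c = [-profit_per_unit[0], -profit_per_unit[1]]

# Business rules as constraints (A_ub @ x <= b_ub):
#  - shared capacity: 1.0*A + 2.0*B machine-hours, at most 2500 hours
#  - do not produce beyond the predicted demand for either product
A_ub = [[1.0, 2.0], [1, 0], [0, 1]]
b_ub = [2500, predicted_demand[0], predicted_demand[1]]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
plan = result.x
print(f"Suggested plan: {plan[0]:.0f} units of A, {plan[1]:.0f} units of B")
print(f"Expected profit at that plan: ${-result.fun:,.0f}")
```

Swapping in a different forecast or tightening a business rule immediately changes the suggested plan and its expected profit, which is the "show the implications of each decision option" behavior described above.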
Big data is no buzzword — it's real, says Mike Gualtieri, Principal Analyst with Forrester Research. It's driving disruptive change across the economy in businesses like healthcare, retail, communications and entertainment. The potential is huge and the time to get on board is now.
Fast Data applications are typically compute intensive and run on high-performance computing architectures. This video explores the need for speed in Fast Data. It examines the properties that define Fast Data, analyzes the challenges, and then takes a closer look at the role Flash technologies play in delivering the performance required by applications using Fast Data.