Our benchmark research into business technology innovation shows that analytics ranks first or second as a business technology innovation priority in 59 percent of organizations. Businesses are moving budgets and responsibilities for analytics closer to sales operations, often in the form of so-called shadow IT organizations that report into decentralized and autonomous business units rather than a central IT organization. In-memory systems (50%), Hadoop (42%) and data warehouse appliances (33%) are the top back-end technologies being used to acquire a new generation of analytic capabilities. They are enabling new possibilities including self-service analytics, mobile access, more collaborative interaction and real-time analytics. In 2014, Ventana Research helped lead the discussion around topics such as information optimization, data preparation, big data analytics and mobile business intelligence. In 2015, we will continue to cover these topics while adding new areas of innovation as they emerge.

Three key topics lead our 2015 business analytics research agenda. The first focuses on cloud-based analytics. In our benchmark research on information optimization, nearly all (97%) organizations said it is important or very important to simplify information access for both their business and their customers. Part of the challenge in optimizing an organization’s use of information is to integrate and analyze data that originates in the cloud or has been moved there. This issue has important implications for information presentation, where analytics are executed and whether business intelligence will continue to move to the cloud in more than a piecemeal fashion. We are currently exploring these topics in our new benchmark research on analytics and data in the cloud. Coupled with the issue of cloud use is the proliferation of embedded analytics and the imperative for organizations to provide scalable analytics within the workflow of applications. A key question we’ll try to answer this year is whether companies that have focused primarily on operational cloud applications at the expense of developing their analytics portfolio, or those that have focused more on analytics, will gain a competitive advantage.

The second research agenda item is advanced analytics. It may be useful to divide this category into machine learning and predictive analytics, which I have discussed and covered in our benchmark research on big data analytics. Predictive analytics has long been available in some sectors of the business world, and in our research two-thirds (68%) of organizations that use it said it provides a competitive advantage. Programming languages such as R, the use of Predictive Model Markup Language (PMML), inclusion of social media data in prediction, massive-scale simulation, and right-time integration of scoring at the point of decision-making are all important advances in this area. Machine learning has also been around for a long time, but it wasn’t until the instrumentation of big data sources and advances in technology that it made sense to use it in more than academic environments. At the same time as the technology landscape is evolving, it is getting more fragmented and complex; in order to simplify it, software designers will need innovative uses of machine learning to mask the underlying complexity through layers of abstraction. A technology such as Spark, which came out of the AMPLab at UC Berkeley, is still immature, but it promises to enable increasing uses of machine learning on big data. Areas such as sourcing data and preparing data for analysis must be simplified so analysts are not overwhelmed by big data.
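
To make the idea of right-time scoring at the point of decision more concrete, here is a minimal Python sketch using scikit-learn; the features, data and probabilities are invented purely for illustration and do not reflect any particular vendor's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [monthly_usage_hours, support_tickets] -> churned (1) or not (0)
X = np.array([[40, 0], [35, 1], [5, 4], [8, 6], [50, 1], [3, 7], [45, 0], [6, 5]])
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# "Right-time" scoring: evaluate a single customer at the point of decision,
# for example while a retention offer is being considered in a call-center workflow.
incoming = np.array([[7, 5]])
print(model.predict_proba(incoming)[0, 1])  # estimated probability of churn
```

In practice the trained model would be exported, for example via PMML, so that the scoring step can run inside the operational application rather than in the analyst's environment.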

Our third area of focus is the user experience in business intelligence tools. Simplification and optimization of information in a context-sensitive manner are paramount. An intuitive user experience can advance the people and process dimensions of business, which have lagged technology innovation according to our research in multiple areas. New approaches coming from business end users, especially in the tech-savvy millennial generation, are pushing the envelope here. In particular, mobility and collaboration are enabling new user experiences in both business organizations and society at large. Adding to this is data collected in more forms, such as location data (an area where we have done research), individual and societal relationships, information and popular brands. How business intelligence tools incorporate such information and make it easy to prepare, design and consume for different organizational personas is not just an agenda focus but also one focus of our 2015 Analytics and Business Intelligence Value Index to be published in the first quarter of the year.

This shapes up as an exciting year. I welcome any feedback you have on this research agenda and look forward to providing research, collaborating and educating with you in 2015.

Regards,

Tony Cosentino

VP and Research Director

Actuate, a company known for powering BIRT, the open source business intelligence technology, has been delivering large-scale consumer and industrial applications for more than 20 years. In December the company announced it would be acquired by OpenText of Ontario, Canada. OpenText is Canada’s largest software vendor, with more than 8,000 employees and a portfolio of enterprise information management products, and it serves primarily large companies. The attraction of Actuate for such a company lies in a number of its legacy assets, its more recent acquisitions and developments, and its existing customer base. Actuate was also awarded a 2014 Ventana Research Business Leadership Award.

Actuate’s foundational asset is BIRT (Business Intelligence and Reporting Tools) and its developer community. With more than 3.5 million developers and 13.5 million downloads, the BIRT developer environment is used in a variety of companies on a global basis. The BIRT community includes Java developers as well as sophisticated business intelligence design professionals, which I discussed in my outline of analytics personas. BIRT is a key project of the Eclipse Foundation, whose open source integrated development environment is familiar to many developers. BIRT provides a graphical interface to build reports at a granular level, and being Java-based, it provides ways to grapple with data and build data connections in a virtually limitless fashion. While new programming models and scripting languages, such as Python and Ruby, are gaining favor, Java remains a primary coding language for large-scale applications. One of the critical capabilities for business intelligence tools is to provide information in a visually compelling and easily usable format. BIRT can provide pixel-perfect reporting and granular adjustments to visualization objects. This benefit is coupled with the advantage of the open source approach: availability of skilled technical human resources on a global basis at relatively low cost.

Last year Actuate introduced iHub 3.1, a deployment server that integrates data from multiple sources and distributes content to end users. iHub has connectors to most database systems including modern approaches such as Hadoop. While Actuate provides the most common connectors out of the box, BIRT and the Java framework allow any data from any system to be brought into the fold. This type of approach to big data becomes particularly compelling for the ability to integrate both large-scale data and diverse data sources. The challenge is that the work sometimes requires customization, but for large-scale enterprise applications, developers often do this to deliver capabilities that would not otherwise be accessible to end users. Our benchmark research into big data analytics shows that organizations need to access many data sources for analysis including transactional data (60%), external data (50%), content (49%) and event-centric data (48%).

In 2014, Actuate introduced iHub F-Type, which enables users to build reports, visualizations and applications and deploy them in the cloud. F-Type mitigates the need to build a separate deployment infrastructure and can act as both a “sandbox” for development and a broader production environment. Using REST-based interfaces, application developers can use F-Type to prototype and scale embedded reports for their custom applications. F-Type is delivered in the cloud, has full enterprise capabilities out of the box, and is free up to a metered output capacity of 50MB. The approach uses output metering rather than the input metering used by some technology vendors. This output metering approach encourages scaling of data and focuses organizations on which specific reports they should deploy to their employees and customers.
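
As a rough illustration of how an application developer might prototype an embedded report against a REST-style service, consider the Python sketch below; the host, resource paths and authentication flow are hypothetical placeholders rather than Actuate's documented API, so the actual iHub/F-Type interfaces should be checked against the product documentation.

```python
import requests

# Hypothetical base URL, endpoints and auth flow for illustration only.
BASE = "https://ftype.example.com/ihub/v2"

login = requests.post(f"{BASE}/login",
                      json={"username": "dev", "password": "secret"})
token = login.json().get("token", "")

# Render a report resource to HTML so it can be embedded in a custom app.
resp = requests.post(
    f"{BASE}/reports/quarterly_sales/render",
    headers={"Authorization": f"Bearer {token}"},
    params={"format": "html"},
)
print(resp.status_code, len(resp.content), "bytes rendered")
```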

Also in 2014, Actuate introduced BIRT Analytics 5.0, a self-service discovery platform that includes advanced analytic capabilities. In my review of BIRT Analytics, I noted its abilities to handle large data volumes and do intuitive predictive analytics. Organizations in our research said that predictive analytics provides advantages such as achieving competitive advantage (for 68%), new revenue opportunities (55%) and increased profitability (52%). Advances in BIRT Analytics 5.0 include integration with iHub 3.1 so developers can bring self-service discovery into their dashboards and public APIs for use in custom applications.

The combination of iHub, the F-Type freemium model, BIRT Analytics and the granular controls that BIRT provides to developers and users presents a coherent strategy, especially in the context of embedded applications. Actuate CEO Pete Cittadini asserts that the company has the most APIs of any business intelligence vendor. The position is a good one, especially since embedded technology is becoming important in the context of custom applications and the so-called Internet of Things. The ability to make a call into another application instead of custom-coding the function itself within the workflow of an end-user application cuts developer time significantly. Furthermore, the robustness of the Actuate platform enables applications to scale almost without limit.

OpenText and Actuate have similarities, such as the maturity of the organizations and the types of large clients they service. It will be interesting to see how Actuate’s API strategy will influence the next generation of OpenText’s analytic applications and to what degree Actuate remains an independent business unit in marketing to customers. As a company that has been built through acquisitions, OpenText has a mature onboarding process that usually keeps the new business unit operating separately. OpenText CEO Mark Barrenechea has outlined his perspective on the acquisition, which will bolster the company’s portfolio for information optimization and analytics, or what it calls enterprise information management. In fact, our benchmark research on information optimization finds that analytics is the top driver for deploying information in two-thirds of organizations. The difference this time may be that today’s enterprises are asking for more integrated information that embeds analytics rather than presenting different interfaces for each application or tool. The acquisition has now closed, and the changes that follow should be watched closely to determine Actuate’s path forward and its potential to deliver higher value for customers within OpenText.

Regards,

Tony Cosentino

VP & Research Director

In 2014, IBM announced Watson Analytics, which uses machine learning and natural language processing to unify and simplify the user experience in each step of the analytic process: data acquisition, data preparation, analysis, dashboarding and storytelling. After a relatively short beta testing period involving more than 22,000 users, IBM released Watson Analytics for general availability in December. There are two editions: the “freemium” trial version allows 500MB of data storage and access to file sizes less than 100,000 rows of data and 50 columns; the personal edition is a monthly subscription that enables larger files and more storage.

Its initial release includes functions to explore, predict and assemble data. Many of the features are based on IBM’s SPSS Analytic Catalyst, which I wrote about and which won the 2013 Ventana Research Technology Innovation Award for business analytics. Once data is uploaded, the explore function enables users to analyze data in an iterative fashion using natural language processing and simple point-and-click actions. Algorithms decide the best fit for graphics based on the data, but users may choose other graphics as needed. An “insight bar” shows other relevant data that may contain insights such as potential market opportunities.

The ability to explore data through visualizations with minimal knowledge is a primary aim of modern analytics tools. With the explore function incorporating natural language processing, which other tools in the market lack, IBM makes analytics accessible to users without the need to drag and drop dimensions and measures across the screen. This feature should not be underestimated; usability is the buying criterion for analytics tools most widely cited in our benchmark research on next-generation business intelligence (by 63% of organizations).

The predict capability of Watson Analytics focuses on driver analysis, which is useful in a variety of circumstances such as sales win and loss, market lift analysis, operations and churn analysis. In its simplest form, a driver analysis aims to understand causes and effects among multiple variables. This is a complex process that most organizations leave to their resident statistician or outsource to a professional analyst. By examining the underlying data characteristics, the predict function can address data sets, including what may be considered big data, with an appropriate algorithm. The benefit for nontechnical users is that Watson Analytics makes the decision on selecting the algorithm and presents results in a relatively nontechnical manner such as spiral diagrams or tree diagrams. Having absorbed the top-level information, users can drill down into top key drivers. This ability enables users to see relative attribute influences and interactions between attributes. Understanding interaction is an important part of driver analysis since causal variables often move together (a challenge known as multicollinearity) and it is sometimes hard to distinguish what is actually causing a particular outcome. For instance, analysis may blame the customer service department for a product defect and point to it as the primary driver of customer defection. Accepting this result, a company may mistakenly try to fix customer service when a product issue needs to be addressed. This approach also overcomes the challenge of Simpson’s paradox, in which a trend that appears in different groups of data disappears or reverses when these groups are combined. This is a hindrance for some visualization tools in the market.
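
Simpson's paradox is easy to reproduce with a few lines of Python and pandas; the customer-retention numbers below are fabricated purely to show how an aggregate trend can reverse once a grouping variable is taken into account.

```python
import pandas as pd

# Within each customer segment, longer service waits go with *lower* retention,
# but the segment with long waits also happens to retain well overall.
data = pd.DataFrame({
    "segment":  ["enterprise"] * 4 + ["smb"] * 4,
    "wait_min": [30, 35, 40, 45, 5, 10, 15, 20],
    "retained": [0.95, 0.93, 0.91, 0.89, 0.70, 0.66, 0.62, 0.58],
})

# Aggregate correlation is positive: longer waits look "good".
print(data["wait_min"].corr(data["retained"]))

# Within-group correlation is negative in both segments.
print(data.groupby("segment").apply(lambda g: g["wait_min"].corr(g["retained"])))
```

A tool that plots only the aggregate view would suggest longer waits go with better retention; conditioning on segment reveals the opposite relationship, which is exactly the trap that automated driver analysis aims to avoid.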

Once users have analyzed the data sufficiently and want to create and share their analysis, the assemble function enables them to bring together various dashboard visualizations in a single screen. Currently, Watson Analytics does such sharing (as well as comments related to the visualizations) via email. In the future, it would be good to see capabilities such as annotation and cloud-based sharing in the product.

Full data preparation capabilities are not yet integrated into Watson Analytics. Currently, it includes a data quality report that gives confidence levels for the current data based on its cleanliness, and basic sort, transform and relabeling are incorporated as well. I assume that IBM has much more in the works here. For instance, its DataWorks cloud service offers APIs for some of the best data preparation and master data management available today. DataWorks can mask data at the source and do probabilistic matching against many sources, both cloud and on-premises. This is a major challenge organizations face when needing to conduct analytics across many data sets. For instance, in multichannel marketing, each individual customer may have many email addresses as well as different mailing addresses, phone numbers and identifiers for social media. A so-called “golden record” needs to be created so all such information can be linked together. Conceptually, the data becomes one long row of data related to that golden record, rather than multiple unassociated data in rows of shorter length. This data needs to be brought into a company’s own internal systems, and personally identifiable information must be stripped out before anything moves into a public domain. In a probabilistic matching system, data is matched not on one field but through associations of data, which give levels of certainty that records should be merged. This is different from past approaches and one of the reasons for significant innovation in the category. Multiple startups have been entering the data preparation space to address the need for a better user experience in data preparation. Such needs have been documented as one of the foundational issues facing the world of big data. Our benchmark research into information optimization shows that data preparation (47%) and quality and consistency (45%) are the most time-consuming tasks for organizations in analytics.
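
A toy version of probabilistic matching can be sketched with nothing more than the Python standard library; the field weights and the 0.85 threshold below are arbitrary choices for illustration, whereas production services such as DataWorks use far more sophisticated matching models.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized string similarity between 0 and 1."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(rec1: dict, rec2: dict, weights: dict) -> float:
    """Weighted combination of per-field similarities."""
    total = sum(weights.values())
    return sum(similarity(rec1[f], rec2[f]) * w for f, w in weights.items()) / total

crm_record = {"name": "Jon A. Smith", "email": "jsmith@example.com", "city": "Boston"}
web_record = {"name": "Jonathan Smith", "email": "j.smith@example.com", "city": "Boston, MA"}

weights = {"name": 0.5, "email": 0.3, "city": 0.2}  # arbitrary illustrative weights
score = match_score(crm_record, web_record, weights)

# Above an (arbitrary) certainty threshold, the records would be linked to one golden record.
print(round(score, 2), "merge" if score > 0.85 else "review")
```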

Watson Analytics is deployed on IBM’s SoftLayer cloud technology and is part of a push to move its analytic portfolio into the cloud. Early in 2015 the company plans to move its SPSS and Cognos products into the cloud via a managed service, thus offloading tasks such as setup, maintenance and disaster recovery management. Watson Analytics will be offered as a set of APIs much as the broader Watson cognitive computing platform has been. Last year, IBM said it would move almost all of its software portfolio to the cloud via its Bluemix service platform. These cloud efforts, coupled with the company’s substantial investment in partner programs with developers and universities around the world, suggest that Watson may power many next-generation cognitive computing applications, a market estimated to grow into the tens of billions of dollars in the next several years.

Overall, I expect Watson Analytics to gain more attention and adoption in 2015 and beyond. Its design philosophy and user experience are innovative, but work must be done in some areas to make it a tool that professionals use in their daily work. Given the resources IBM is putting into the product and the massive amounts of product feedback it is receiving, I expect initial release issues to be worked out quickly through the continuous release cycle. Once they are, Watson Analytics will raise the bar on self-service analytics.

Regards,

Tony Cosentino

VP and Research Director

PentahoWorld, the first user conference for this 10-year-old supplier of data integration and business analytics software, attracted more than 400 customers in roles ranging from IT and database professionals to business analysts and end users. The diversity of the crowd reflects Pentaho’s broad portfolio of products, which covers the integration aspects of big data analytics with Pentaho Data Integration and the front-end and visualization aspects with Pentaho Business Analytics. In essence, the portfolio provides an end-to-end path from data to analytics through what the company introduced as Big Data Orchestration, which brings governed data delivery and a streamlined data refinery together on one platform.

Pentaho has made progress in business over the past year, picking up Fortune 1000 clients and moving from providing analytics to midsize companies to serving major companies such as Halliburton, Lufthansa and NASDAQ. One reason for this success is Pentaho’s ability to integrate large-scale data from multiple sources including enterprise data warehouses, Hadoop and other NoSQL approaches. Our research into big data integration shows that Hadoop is a key technology that 44 percent of organizations are likely to use, but it is just one option in the enterprise data environment. A second key for Pentaho has been the embeddable nature of its approach, which enables companies, especially those selling cloud-based software as a service (SaaS), to use analytics to gain competitive advantage by placing its tools within their applications. For more detail on Pentaho’s analytics and business intelligence tools please see my previous analyst perspective.

A key advance for the company over the past year has been the development and refinement of what the company calls big data blueprints. These are general use cases in such areas as ETL offloading and customer analytics. Each approach includes design patterns for ETL and analytics that work with high-performance analytic databases including NoSQL variants such as Mongo and Cassandra.

The blueprint concept is important for several reasons. First, it helps Pentaho focus on specific market needs. Second, it shows customers and partners processes that enable them to get immediate return on the technology investment. The same research referenced above shows that organizations manage their information and technology better than their people and processes; to realize full value from spending on new technology, they need to pay more attention to how the technology fits with these cultural aspects.

At the user conference, the company announced release 5.2 of its core business analytics products and featured its Governed Data Delivery concept and Streamlined Data Refinery. The Streamlined Data Refinery provides a process for business analysts to access the already integrated data provided through Pentaho Data Integration and create data models on the fly. The advantage is that this is not a technical task and the business analyst does not have to understand the underlying metadata or the data structures. The user chooses the dimensions of the analysis using menus that offer multiple combinations to be chosen in an ad hoc manner. Then the Streamlined Data Refinery automatically generates a data cube that is available for fast querying in an analytic database. Currently, Pentaho supports only the HP Vertica database, but its roadmap promises to add high-performance databases from other suppliers. The entire process can take only a few minutes and provides a much more flexible and dynamic process than asking IT to rebuild a data model every time a new question is asked.
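
Conceptually, the refinery step amounts to letting an analyst pick dimensions and having the system pre-aggregate the blended data for fast querying. The pandas sketch below illustrates that idea with made-up order data; it is not Pentaho's implementation, which generates and publishes cubes to an analytic database such as HP Vertica.

```python
import pandas as pd

# Blended data as it might arrive from a data-integration pipeline.
orders = pd.DataFrame({
    "region":  ["East", "East", "West", "West", "West"],
    "product": ["A", "B", "A", "A", "B"],
    "quarter": ["Q1", "Q1", "Q1", "Q2", "Q2"],
    "revenue": [120.0, 80.0, 200.0, 150.0, 95.0],
})

# The analyst-chosen dimensions drive the aggregate that gets published;
# swapping the list and re-running regenerates the "cube".
dimensions = ["region", "quarter"]
cube = orders.groupby(dimensions, as_index=False)["revenue"].sum()
print(cube)
```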

While Pentaho Data Integration enables users to bring together all available data and integrate it to find new insights, Streamlined Data Refinery gives business users direct access to the blended data. In this way they can explore data dynamically without involving IT. The other important aspect is that it easily provides the lineage of the data. Internal or external auditors often need to understand the nature of the data and the integration, which data lineage supports. Such a feature should benefit all types of businesses but especially those in regulated industries. This approach addresses the two top needs of business end users, which, according to our benchmark research into information optimization, are to drill into data (for 37%) and search for specific information (36%).

Another advance is Pentaho 5.2’s support for Kerberos security on Cloudera, Hortonworks and MapR. Cloudera, currently the largest Hadoop distribution, and Hortonworks, which is planning to raise capital via a public offering, hold the lion’s share of the commercial Hadoop market. Kerberos puts a layer of authentication security between the Pentaho Data Integration tool and the Hadoop data. This helps address security concerns which have dramatically increased over the past year after major breaches at retailers, banks and government institutions.

These announcements show results of Pentaho’s enterprise-centric customer strategy as well as the company’s investment in senior leadership. Christopher Dziekan, the new chief product officer, presented a three-year roadmap that focuses on data access, governance and data integration. It is good to see the company put its stake in the ground with a well-formed vision of the big data market. Given the speed at which the market is changing and the necessity for Pentaho to consider the needs of its open source community, it will be interesting to see how the company adjusts the roadmap going forward.

For enterprises grappling with big data integration and trying to give business users access to new information sources, Pentaho’s Streamlined Data Refinery deserves a look. For both enterprises and ISVs that want to apply integration and analytics in the context of another application, Pentaho’s REST-based APIs allow embedding of end-to-end analytic capabilities. Together with the big data blueprints discussed above, Pentaho is able to deliver a targeted yet flexible approach to big data.

Regards,

Tony Cosentino

VP and Research Director

At a conference of more than 3,500 users, Splunk executives showed off their company’s latest tools. Splunk makes software for discovering, monitoring and analyzing machine data, which is often considered data exhaust since it is a by-product of computing processes and applications. But machine data is essential to a smoothly running technology infrastructure that supports business processes. One advantage is that because machine data is not recorded by end users, it is less subject to input error. Splunk has grown rapidly by solving fundamental problems associated with the complexities of information technology and by challenging assumptions in IT systems and network management, an approach increasingly referred to as big data analytics. The two main and related assumptions it challenges are that different types of IT systems should be managed separately and that data should be modeled prior to recording it. Clint Sharp, Splunk’s director of product marketing, pointed out that network and system data can come from several sources and argued that using point-solution tools and a “model first” approach does not work for big data and a question-and-answer paradigm. Our research into operational intelligence finds that IT systems are the most important information source in almost two-thirds (62%) of organizations. Splunk used the conference to show how it is bringing the business trends of mobility, cloud deployment and security to these data management innovations.

Presenters from major customer companies demonstrated how they work with Splunk Enterprise. For example, according to Michael Connor, senior platform architect for Coca-Cola, bringing all the company’s data into Splunk allowed the IT department to reduce trouble tickets by 80 percent and operating costs by 40 percent. Beyond asserting the core value of streamlining IT operations and the ability to quickly provision system resources, Connor discussed other uses for data derived from the Splunk product. Coca-Cola IT used a free community add-on to deliver easy-to-use dashboards for the security team. He showed how channel managers compare different vending environments in ways they had never done before. They also can conduct online ethnographic studies to better understand behavior patterns and serve different groups. For Coca-Cola, the key to success for the application was to bring data from various platforms in the organization into one data platform. This challenge, he said, has more to do with people and processes than technology, since many parts of an organization are protective of their data, in effect forming what he called “data cartels.” This situation is not uncommon. Our research into information optimization shows that organizations need these so-called softer disciplines to catch up with their capabilities in technology and information to realize full value from data and analytics initiatives.

In keeping up with trends, Splunk is making advances in mobility. One is MINT for monitoring mobile devices. With the company’s acquisition of BugSense as a foundation, Splunk has built an extension of its core platform that consumes and indexes application and other machine data from mobile devices. The company is offering the MINT Express version to developers so they can build the operational service into their applications. Similar to the core product, MINT has the ability to track transactions, network latency and crashes throughout the IT stack. It can help application developers quickly solve user experience issues by understanding root causes and determining responsibility. For instance, MINT Express can answer questions such as these: Is it an application issue or a carrier issue? Is it a bad feature or a system problem? After it is applied, end-user customers get a better digital experience, which results in more time spent with the application and increased customer loyalty in a mobile environment where the cost of switching is low. Splunk also offers MINT Enterprise, which allows users to link and cross-reference data in Splunk Enterprise. The ability to instrument data in a mobile environment, draw a relationship with the enterprise data and display key operational variables is critical to serving and satisfying consumers. By extending this capability into the mobile sphere, Splunk MINT delivers value for corporate IT operations as well as the new breed of cloud software providers. However, Splunk risks stepping on its partners’ toes as it takes advantage of opportunities such as mobility. In my estimation, the risk is worth taking given that mobility is a systemic change that represents enormous opportunity. Our research into business technology innovation shows mobility in a virtual tie with collaboration as the second-most important innovation priority for companies today.

Cloud computing is another major shift that the company is prioritizing. Praveen Rangnath, director of cloud product marketing, said that Splunk Cloud enables the company to deliver 100 percent on service level agreements through fail-over capabilities across AWS availability zones, redundant operations across indexers and search heads, and by using Splunk on Splunk itself. Perhaps the most important capability of the cloud product is its integration of enterprise and on-demand systems. This capability allows a single view and queries across multiple data sources no matter where they physically reside. Coupled with Splunk’s abilities to ingest data from various NoSQL systems – such as Mongo, Cassandra, Accumulo, Amazon’s Elastic MapReduce, Amazon S3 and even mainframes with the Ironstream crawler – its hybrid search capability is unique. The company’s significant push into the cloud is reflected by both a 33 percent reduction in price and its continued investment in the platform. According to our research into information optimization, one of the biggest challenges with big data is simplification of data access; as data sources increase, easy access becomes more important. More than 92 percent of organizations that have 16 to 20 data sources rated information simplification very important. As data proliferates both on-premises and in the cloud, Splunk’s software abstracts users from the technical complexities of integrating and accessing the hybrid environment. (Exploring this and related issues, our upcoming benchmark research into data and analytics in the cloud will examine trends in business intelligence and analytics related to cloud computing.)

Usability is another key consideration: In our research on next-generation business intelligence nearly two-thirds (63%) of organizations said that it is an important evaluation criterion, more than any other. At the user conference Divanny Lamas, senior manager of product management, discussed new features aimed at the less sophisticated Splunk user. The Advanced Field Extractor enables users to extract fields in a streamlined fashion that does not require them to write an expression. Instant Pivot enables easy access to a library of panels and dashboards that allows end users to pivot and visually explore data. Event Pattern Detection clusters patterns in the data to make different product usage metrics and issues impacting downtime easier to resolve. Each of these advances represents progress in broader usability and organizational appeal. While Splunk continues to make its data accessible to business users, gaining broader adoption is still an uphill battle because much of the Splunk data is technical in nature. The current capabilities address the technologically sophisticated knowledge worker or the data analyst, while a library of plug-ins allows more line-of-business end users to perform visualization. (For more on the analytic user personas that matter in the organization and what they need to be successful, please see my analysis.)

Splunk is building an impressive platform for collecting and analyzing data across the organization. The question from the business analytics perspective is whether the data can be modeled in ways that easily represent each organization’s unique business challenges. Splunk provides search capabilities for IT data by default, but when other data sources need to be brought in for more advanced reporting and correlation, it requires the data to be normalized, categorized and parsed. Currently, business users apply various data models and frameworks from major IT vendors as well as various agencies and data brokers. This dispersion could provide an opportunity for Splunk to provide a unified platform; the more data businesses ingest, the more likely they will rely on such a platform. Splunk’s Common Information Model provides a metadata framework using key-value pair representation similar to what other providers of cloud analytic applications are doing. When we consider the programmable nature of the platform including RESTful APIs and various SDKs, Hunk’s streamlined access to Hadoop and other NoSQL sources, Splunk DB Connect for relational sources, the Splunk Cloud hybrid access model and the instrumentation of mobile data in MINT, the expansive platform idea seems plausible.
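
For readers curious about the programmable side, the sketch below shows roughly how a Python script might submit a one-shot search over Splunk's REST interface; the host, credentials and index are placeholders, and the endpoint and parameter names should be checked against the Splunk REST API documentation for the version in use.

```python
import requests

SPLUNK = "https://splunk.example.com:8089"  # management port (placeholder host)
AUTH = ("admin", "changeme")                # placeholder credentials

# Submit a one-shot search job; results are returned directly rather than
# via a job id that has to be polled.
resp = requests.post(
    f"{SPLUNK}/services/search/jobs",
    auth=AUTH,
    data={
        "search": "search index=web_logs status>=500 | stats count by host",
        "exec_mode": "oneshot",
        "output_mode": "json",
    },
    verify=False,  # self-signed certificates are common on the management port
)

for row in resp.json().get("results", []):
    print(row["host"], row["count"])
```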

A complicating factor in whether Splunk will become such a platform for operational intelligence and big data analytics is the Internet of Things (IoT), which collects data from various devices. Massive amounts of sensor data already are moving through the Internet, but IoT approaches and service architectures are works in progress. Currently, many of these architectures do not communicate with others. Given Splunk’s focus on machine data, which is a key type of input for big data analytics in 42 percent of organizations according to our research, IoT appears to be a natural fit, as it generates event-centered data, a type of input for big data analytics in 48 percent of organizations. There is some debate about whether Splunk is a true event processing engine, but that depends on how the category is defined. Log messages, its specialty, are not events per se but rather are data related to something that has happened in an IT infrastructure. Once correlated, this data points directly to something of significance, including events that can be acted upon. If such a correlation triggers a system action, and that action is taken in time to solve the problem, then the data provides value and it should not matter whether the system is acting in real time or near real time. In this way, the data itself is Splunk’s advantage. To be successful in becoming a broader data platform, the company will need to advance its Common Information Model, continue to emphasize the unique value of machine data, build its developer and partner ecosystem, and encourage customers to push the envelope and develop new use cases.

For organizations considering Splunk for the first time, IT operations, developer operations, security, fraud management and compliance management are obvious areas to evaluate. Splunk’s core value is that it simplifies administration, reduces IT costs and can reduce risk through pattern recognition and anomaly detection. Each of these areas can deliver value immediately. For those with a current Splunk implementation, we suggest examining use cases related to business analytics. Specifically, comparative analysis and analysis of root causes, online ethnography and feature optimization in the context of the user experience can all deliver value. As ever more data comes into their systems, companies also may find it reasonable to consider Splunk in other new ways like big data analytics and operational intelligence.

Regards,

Tony Cosentino

VP and Research Director

Qlik was an early pioneer in developing a substantial market for a visual discovery tool that enables end users to easily access and manipulate analytics and data. Its QlikView application uses an associative experience that takes an in-memory, correlation-based approach to present a simpler design and user experience for analytics than previous tools. Driven by sales of QlikView, the company’s revenue has grown to more than $500 million, and, having originated in Sweden, the company now has a global presence.

At its annual analyst event in New York the business intelligence and analytics vendor discussed recent product developments, in particular the release of Qlik Sense. It is a drag-and-drop visual analytics tool targeted at business users but scalable enough for enterprise use. Its aim is to give business users a simplified visual analytic experience that takes advantage of modern cloud technologies. Such a user experience is important; our benchmark research into next-generation business intelligence shows that usability is an important buying criterion for nearly two out of three (63%) companies. A couple of months ago, Qlik introduced Qlik Sense for desktop systems, and at the analyst event it announced general availability of the cloud and server editions.

According to our research into business technology innovation, analytics is the top initiative for new technology: 39 percent of organizations ranked it their number-one priority. Analytics includes exploratory and confirmatory approaches to analysis. Ventana Research refers to exploratory analytics as analytic discovery and segments it into four categories that my colleague Mark Smith has articulated. Qlik’s products belong in the analytic discovery category. With these tools users can investigate data sets in an intuitive and visual manner, often conducting root cause analysis and decision support functions. This software market is relatively young, and competing companies are evolving and redesigning their products to suit changing tastes. Tableau, one of Qlik’s primary competitors, which I wrote about recently, is adapting its current platform to developments in hardware and in-memory processing, focusing on usability and opening up its APIs. Others have recently made their first moves into the market for visual discovery applications, including Information Builders and MicroStrategy. Companies such as Actuate, IBM, SAP, SAS and Tibco are focused on incorporating more advanced analytics in their discovery tools. For buyers, this competitive and fragmented market creates a challenge when comparing analytic discovery offerings.

A key differentiator is Qlik Sense’s new modern architecture, which is designed for cloud-based deployment and embedding in other applications for specialized use. Its analytic engine plugs into a range of Web services. For instance, the Qlik Sense API enables the analytic engine to call to a data set on the fly and allow the application to manipulate data in the context of a business process. An entire table can be delivered to node.js, which extends the JavaScript API to offer server-side features and enables the Qlik Sense engine to take on an almost unlimited number of real-time connections by not blocking input and output. Previously developers could write PHP script and pipe SQL to get the data; the resulting application was viable but complex to build and maintain. Now all they need is JavaScript and HTML. The Qlik Sense architecture abstracts the complexity and allows JavaScript developers to make use of complex constructs without intricate knowledge of the database. The new architecture can decouple the Qlik engine from the visualizations themselves, so Web developers can define expressions and dimensions without going into the complexities of the server-side architecture. Furthermore, by decoupling the services, developers gain access to open source visualization technologies such as d3.js. Cloud-based business intelligence and extensible analytics are becoming a hot topic. I have written about this, including a glimpse of our newly announced benchmark research on the next generation of data and analytics in the cloud. From a business user perspective, these types of architectural changes may not mean much, but for developers, OEMs and UX design teams, they allow much faster time to value through a simpler component-based approach to utilizing the Qlik analytic engine and building visualizations.
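
The architectural point about decoupling the engine from the visualization layer can be illustrated with a deliberately simplified Python sketch; this is not the Qlik Sense API (which is JavaScript- and Web-service-based) but a stand-in that shows why a front end only needs the engine's evaluated output, never the underlying data source.

```python
# Conceptual sketch only: NOT the Qlik Sense API. It illustrates the design
# idea of decoupling an analytic engine (which evaluates expressions over
# data) from the visualization layer that consumes its output.
from dataclasses import dataclass

@dataclass
class HyperCubeRequest:
    dimensions: list   # e.g. ["region"]
    measures: list     # e.g. ["sum(sales)"]

class AnalyticEngine:
    def __init__(self, rows):
        self.rows = rows

    def evaluate(self, req: HyperCubeRequest):
        # Minimal group-by/sum evaluation standing in for the engine's work;
        # measures other than a sum of "sales" are ignored in this sketch.
        out = {}
        for r in self.rows:
            key = tuple(r[d] for d in req.dimensions)
            out[key] = out.get(key, 0) + r["sales"]
        return [{"dims": k, "value": v} for k, v in out.items()]

# A front end (web page, embedded app) only needs the engine's response.
engine = AnalyticEngine([
    {"region": "EMEA", "sales": 120}, {"region": "EMEA", "sales": 80},
    {"region": "APAC", "sales": 200},
])
print(engine.evaluate(HyperCubeRequest(["region"], ["sum(sales)"])))
```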

The modern architecture of Qlik Sense, together with the company’s ecosystem of more than 1,000 partners and a professional services organization that has completed more than 2,700 consulting engagements, gives Qlik a competitive position. The service partner relationships, including those with major systems integrators, are key to the company’s future since analytics is as much about change management as technology. Our research in analytics consistently shows that people and processes lag technology and information in performance with analytics. Furthermore, in our benchmark research into big data analytics, the benefits most often mentioned as achieved are better communication and knowledge sharing (24%), better management and alignment of business goals (18%), and gaining competitive advantage (17%).

As tested on my desktop, Qlik Sense shows an intuitive interface with drag-and-drop capabilities for building analysis. Formulas are easy to incorporate as new measures, and the palette offers a variety of visualization options which automatically fit to the screen. The integration with QlikView is straightforward in that a data model from QlikView can be saved seamlessly and opened intact in Qlik Sense. The storyboard function allows multiple visualizations to be built into narratives and annotations to be added, including linkages with data. For instance, annotations can be added to specific inflection points in a trend line or outliers that may need explanation. Since the approach is all HTML5-based, the visualizations are ready for deployment to mobile devices and responsive to various screen sizes including newer smartphones, tablets and the new class of so-called phablets. In the evaluation of vendors in our Mobile Business Intelligence Value Index, Qlik ranked fourth overall.

In the software business, of course, technology advances alone don’t guarantee success. Qlik has struggled to clarify the positioning of its next-generation product: it is not a replacement for QlikView. QlikView users are passionate about keeping their existing tool because they have already designed dashboards and calculations using it. Vendors should not underestimate user loyalty and adoption. Therefore Qlik now promises to support both products for as long as the market continues to demand them. The majority of R&D investment will go into Qlik Sense as developers focus on surpassing the capabilities of QlikView. For now, the company will follow a bifurcated strategy in which the tools work together to meet the needs of various organizational personas. To me, this is the right strategy. There is no issue in being a two-product company, and the revised positioning of Qlik Sense complements QlikView on both the self-service side and the developer side. Qlik Sense is not yet as mature a product as QlikView, but from a business user’s perspective it is a simple and effective analysis tool for exploring data and building different data views. It is simpler because users do not need to script the data in order to create the specific views they deem necessary. As the product matures, I expect it to become more than an end user’s visual analysis tool since the capabilities of Qlik Sense lend themselves to web-scale approaches. Over time, it will be interesting to see how the company harmonizes the two products and how quickly customers will adopt Qlik Sense as a stand-alone tool.

For companies already using QlikView, Qlik Sense is an important addition to the portfolio. It will allow business users to become more engaged in exploring data and sharing ideas. Even for those not using QlikView, with its modern architecture and open approach to analytics, Qlik Sense can help future-proof an organization’s current business intelligence architecture. For those considering Qlik for the first time, the choice may be whether to bring in one or both products. Given the proven approach of QlikView, in the near term a combination approach may be a better solution in some organizations. Partners, content providers and ISVs should consider Qlik Branch, which provides resources for embedding Qlik Sense directly into applications. The site provides developer tools, community efforts such as d3.js integrations and synchronization with Github for sharing and branching of designs. For every class of user, Qlik Sense can be downloaded for free and tested directly on the desktop. Qlik has made significant strides with Qlik Sense, and it is worth a look for anybody interested in the cutting edge of analytics and business intelligence.

Regards,

Tony Cosentino

VP and Research Director

Tableau Software introduced its latest advancements in analytics and business intelligence software, along with its future plans, to more than 5,000 attendees at its annual user conference in its home town of Seattle. The enthusiasm of the primarily millennial-age crowd reflected not only the success of the young company but also its aspirations. Both Tableau and the market for what Ventana Research calls visual and data discovery have experienced rapid growth that is likely to continue.

The company focuses on the mission of usability, which our benchmark research into next-generation business intelligence shows to be a top software buying criterion for more organizations (63%) than any other. Tableau introduced advances in this area including analytic ease of use, APIs, data preparation, storyboarding and mobile technology support as part of its user-centric product strategy. Without revealing specific timelines, executives said that the next major release, Tableau 9.0, likely will be available in the first half of 2015 as outlined by the CEO in his keynote.

Chief Development Officer and co-founder Chris Stolte showed upcoming ease-of-use features such as the addition of Excel-like functionality within workbooks. Users can type a formula directly into a field and use auto-completion or drag and drop to bring in other fields that are components of the calculation. The new calculation can be saved as a metric and easily added to the Tableau data model. Other announcements included table calculations, geographic search capabilities and radial and lasso selection on maps. The live demonstration between users onstage was seamless, created flows that the audience could follow and showed impressive navigation capabilities.

Stolte also demonstrated upcoming statistical capabilities. Box plots have been available since Tableau 8.1, but now the capabilities have been extended for comparative analysis across groups and to create basic forecast models. The comparative descriptive analytics has been improved with drill-down features and calculations within tables. This is important since analysis between and within groups is necessary to use descriptive statistics to reveal business insights. Our research into big data analytics shows that some of the most important analytic approaches are descriptive in nature: Pivot tables (48%), classification or decision trees (39%) and clustering (37%) are the methods most widely used for big data analytics.

When it comes to predictive analytics, however, Tableau is still somewhat limited. Companies such as IBM, Information Builders, MicroStrategy, SAS and SAP have focused more resources on incorporating advanced analytics in their discovery tools; Tableau has to catch up in this area. Forecasting of basic trend lines is a first step, but if the tool is meant for model builders, then I’d like to see more families of curves and algorithms to fit different data sets such as seasonal variations. Business users, Tableau’s core audience, need automated modeling approaches that can evaluate the characteristics of the data and produce adequate models. How different stakeholders communicate around the statistical parameters and models is also unclear to me. Our research shows that summary statistics and model comparisons are important capabilities for administering and sharing predictive analytics. Overall, Tableau is making strides in both descriptive and predictive statistics and making this intuitive for users.
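
As a simple example of the kind of "family of curves" fitting mentioned above, the following Python/NumPy sketch fits a linear trend plus a 12-month seasonal component to synthetic demand data and projects it forward; it is illustrative only and unrelated to Tableau's built-in forecasting.

```python
import numpy as np

# 24 months of synthetic demand: upward trend plus an annual seasonal cycle.
t = np.arange(24)
y = (100 + 2.5 * t + 15 * np.sin(2 * np.pi * t / 12)
     + np.random.default_rng(0).normal(0, 3, 24))

# Design matrix: intercept, linear trend, and a sine/cosine pair for the
# 12-month seasonality; least squares fits the whole curve family at once.
X = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t / 12), np.cos(2 * np.pi * t / 12)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Forecast the next 6 months with the same curve family.
t_future = np.arange(24, 30)
X_future = np.column_stack([np.ones_like(t_future), t_future,
                            np.sin(2 * np.pi * t_future / 12),
                            np.cos(2 * np.pi * t_future / 12)])
print(X_future @ coef)
```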

Presenters also introduced new data preparation capabilities on Excel imports including the abilities to view delimiters, split columns and even un-pivot data. The software also has some ability to clean up spreadsheets such as getting rid of headers and footers. Truly dirty data, such as survey data captured in spreadsheets or created with custom calculations and nesting, is not the target here. The data preparation capabilities can’t compare with those provided by companies such as Alteryx, Informatica, Paxata, Pentaho, Tamr or Trifacta. However, it is useful to quickly upload and clean a basic Excel document and then visualize it in a single application. According to our benchmark research on information optimization, data preparation (47%) and checking data for quality and consistency (45%) are the primary tasks on which analysts spend their time.

Storytelling (which Tableau calls Storypoints) is an exciting area of development for the company. Introduced last year, it enables users to build a sequential narrative that includes graphics and text. New developments enable the user to view thumbnails of different visualizations and easily pull them into the story. Better control over calculations, fonts, colors and text positioning was also introduced. While these changes may seem minor, they are important to this kind of application. A major reason that most analysts take their data out of an analytic tool and put it into PowerPoint is to have this type of control and ease of use. While PowerPoint remains dominant when it comes to communicating analytic results in business, a new category of tools is challenging Microsoft’s preeminence in this area. Tableau Storypoints is one of the easiest to use in the market.

API advancements were discussed by Francois Ajenstat, senior director of product management, who suggested that in the future anything done on Tableau Server can be done through APIs. In this way different capabilities will be exposed so that other software can use them (Tableau visualizations, for example) within the context of their workflows. As well, Tableau has added REST APIs including JavaScript capabilities, which allow users to embed Tableau in applications to do such things as customize dashboards. The ability to visualize JSON data in Tableau is also a step forward since exploiting new data sources is the fastest way to gain business advantage from analytics. This capability was demonstrated using government data, which is commonly packaged in JSON format. As Tableau continues its API initiatives, we hope to see more advances in exposing APIs so Tableau can be integrated into governance workflows, which can be critical to enterprise implementations. APIs also can enable analytic workflow tools to more easily access the product so statistical modelers can understand the data prior to building models. While Tableau integrates on the back end with analytic tools such as Alteryx, the visual and data discovery aspects that must precede statistical model building are still a challenge. Having more open APIs will open up opportunities for Tableau’s customers and could broaden its partner ecosystem.
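
To show why JSON support matters in practice, here is a small Python example that flattens a nested, open-data-style payload into a flat table a visualization tool can consume; the agencies and figures are invented for illustration.

```python
import pandas as pd

# A small sample in the nested shape common to open-government JSON feeds.
payload = {
    "results": [
        {"agency": "DOT", "metrics": {"budget": 12.4, "projects": 31}},
        {"agency": "EPA", "metrics": {"budget": 8.1, "projects": 17}},
    ]
}

# Flatten the nested records into a tabular frame; nested keys become
# dotted column names (agency, metrics.budget, metrics.projects).
flat = pd.json_normalize(payload["results"])
print(flat)
flat.to_csv("agency_metrics.csv", index=False)  # ready for a BI tool to ingest
```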

The company made other enterprise announcements such as Kerberos Security, easier content management, an ability to seamlessly integrate with Salesforce.com, and a parallelized approach to accessing very large data sets. These are all significant developments. As Tableau advances its enterprise vision and continues to expand on the edges of the organization, I expect it to compete in more enterprise deals. The challenge the company faces is still one of the enterprise data model. Short of displacing the enterprise data models that companies have invested in over the years, it will continue to be an uphill battle for Tableau to displace large incumbent BI vendors. Our research into Information Optimization shows that the integration with security and user-access frameworks is the biggest technical challenge for optimizing information. For a deeper look at Tableau’s battle for the enterprise, please read my previous analysis.

Perhaps the most excitement from the audience came from the introduction of Project Elastic, a new mobile application with which users can automatically turn an email attachment in Excel into a visualization. The capability is native so it works in offline mode and provides a fast and responsive experience. The new direction bifurcates Tableau’s mobile strategy, which heretofore was a purely HTML5 strategy introduced with Tableau 6.1. Tableau ranked seventh in our 2014 Mobile Business Intelligence Value Index.

Tableau has a keen vision of how it can carve out a place in the analytics market. Usability has a major role in building a following among users that should help it continue to grow. The Tableau cool factor won’t go unchallenged, however. Major players are introducing visual discovery products amid discussion about the need for more data governance in the enterprise and in cloud computing; Tableau will likely have to blend into the fabric of analytics and BI in organizations. To do so, the company will need to forge significant partnerships and open its platform for easy access.

Organizations considering visual discovery software in the context of business and IT should include Tableau. For enterprise implementations, care should be taken to ensure Tableau can support the broader manageability and reliability requirements of larger-scale deployments. Visualization of data continues to be a critical method for understanding the challenges of global business but should not be the only analytic approach taken for all types of users. Tableau is on the leading edge of visual discovery and should not be overlooked.

Regards,

Tony Cosentino

VP and Research Director

It’s widely agreed that cloud computing is a major technology innovation. Many companies use cloud-based systems for specific business functions such as customer service, sales, marketing, finance and human resources. More generally, however, analytics and business intelligence (BI) have not migrated to the cloud as quickly. But now cloud-based data and analytics products are becoming more common. This trend is most popular among technology companies, small and midsize businesses, and departments in larger ones, but there are examples of large companies moving their entire BI environments to the cloud. Our research into big data analytics shows that more than one-fourth of analytics initiatives for companies of all sizes are cloud-based.

Like other cloud-based applications, cloud analytics offers enhanced scalability and flexibility, affordability and IT staff optimization. Our research shows that in general the top benefits are lowered costs (for 40%), improved efficiency (39%) and better communication and knowledge sharing (34%). Using the cloud, organizations can use a sophisticated IT infrastructure without having to dedicate staff to install and support it. There is no need for comprehensive development and testing because the provider is responsible for maintaining and upgrading the application and the infrastructure. The cloud can also provide flexible infrastructure resources to support “sandbox” testing environments for advanced analytics deployments. Multitenant cloud deployments are more affordable because costs are shared across many companies. When used departmentally, application costs need not be capitalized but instead can be treated as operational expenditures. Capabilities can be put to use quickly, as vendors develop them, and updates need not disrupt use. Finally, some cloud-based interfaces are more intuitive for end users since they have been designed with the user experience in mind. Regarding cloud technology, our business technology innovation research finds that usability is the most important technology evaluation criterion (for 64% of participants), followed by reliability (54%) and capability.

For analytics and BI specifically, there are still issues holding back adoption. Our research finds that a primary reason companies do not deploy cloud-based applications of any sort is security and compliance concerns. For analytics and business intelligence, data-related activities are another reason, since cloud-based approaches often require data integration, a range of data preparation and transmission of sensitive data across an external network. Such issues are especially prevalent for companies that have legacy BI tools using data models that have been distributed across their divisions. Often these organizations have defined their business logic and metrics calculations within the context of these tools. Furthermore, these tools may be integrated with other core applications such as forecasting and planning. To re-architect such data models and metrics calculations is a challenge some companies are reluctant to undertake.

In addition, despite widespread use of some types of cloud-based systems, for nontechnical business people discussions of business intelligence in the cloud can be confusing, especially when they involve information integration, the types of analytics to be performed and where the analytic processes will run. The first generation of cloud applications focused on end-user processes related to the various lines of business and largely ignored the complexities inherent in information integration and analytics. Organizations can no longer ignore these complexities, since doing so exacerbates the challenge of fragmented systems and distributed data. Buyers and architects should understand the benefits of analytics in the cloud and weigh those benefits against the challenges described above.

Our upcoming benchmark research into data and analytics in the cloud will examine the current maturity of this market as well as opportunities and barriers to organizational adoption across lines of business and IT. It will evaluate cloud-based analytics in the context of trends such as big data, mobile technology and social collaboration as well as location intelligence and predictive analytics. It will consider how cloud computing enables these and other applications and identify leading indicators for adoption of cloud-based analytics. It also will examine how cloud deployment enables large-scale and streaming applications. For example, it will examine real-time processing of vast amounts of data from sensors and other semistructured data (often referred to as the Internet of Things).

It is an exciting time to be studying this particular market as companies consider moving platforms to the cloud. I look forward to receiving any qualified feedback as we move forward to start this important benchmark research. Please get in touch if you have an interest in this area of our research.

Regards,

Tony Cosentino

VP and Research Director

Our benchmark research consistently shows that business analytics is the most significant technology trend in business today and that acquiring effective predictive analytics is organizations’ top priority for analytics. It enables them to look forward rather than backward and, participating organizations reported, leads to competitive advantage and operational efficiencies.

In our benchmark research on big data analytics, for example, 64 percent of organizations ranked predictive analytics as the most important analytics category for working with big data. Yet a majority indicated that they do not have enough experience in applying predictive analytics to business problems and lack training on the tools themselves.

Predictive analytics improves an organization’s ability to understand potential future outcomes of variables that matter. Its results enable an organization to decide correct courses of action in key areas of the business. Predictive analytics can enhance the people, process, information and technology components of an organization’s future performance.

In our most recent research on this topic, more than half (58%) of participants indicated that predictive analytics is very important to their organization, but only one in five said they are very satisfied with their use of those analytics. Furthermore, our research found that implementing predictive analytics would have a transformational impact in one-third of organizations and a significant positive impact in more than half of the rest.

In our new research project, The Next Generation of Predictive Analytics, we will revisit predictive analytics with an eye to determining how attitudes toward it have changed, along with its current and planned use and its importance in business. There are significant changes in this area, including where, how, why and when predictive analytics are applied. We expect to find changes not only in forecasting and analyzing customer churn but also in operational use at the front lines of the organization and in improving the analytic process itself. The research will also look at the progress of emerging statistical languages such as R and Python, which I have written about.
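To make concrete the kind of workflow this research will examine, here is a minimal Python sketch of a customer churn model of the sort analysts build with open source tools such as scikit-learn. The input file and column names are hypothetical.

```python
# Minimal sketch of a predictive churn model in Python with pandas and scikit-learn.
# The file name and feature columns are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("customers.csv")                                  # assumed input data
features = ["tenure_months", "monthly_spend", "support_tickets"]   # hypothetical columns
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["churned"], test_size=0.25, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]   # churn probability for each holdout customer
print("Holdout AUC:", roc_auc_score(y_test, scores))
```

An equivalent model can be expressed in a few lines of R with glm(), which is one reason these languages are advancing so quickly among analysts.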

As does big data analytics, predictive analytics involves sourcing data, creating models, deploying them and managing them to understand when an analytic model has become stale and ought to be revised or replaced. It should be obvious that only the most technically advanced users will be familiar with all of this, so to achieve broad adoption, predictive analytics products must mask the complexity and be easy to use. Our research will determine the extent to which usability and manageability are being built into product offerings.
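As one illustration of the model management step, the sketch below flags a stale model by comparing its accuracy on recently labeled data against the level recorded at deployment. The threshold and function are assumptions for illustration, not a standard or prescribed method.

```python
# Illustrative sketch: detect a stale model by monitoring performance drift.
# The tolerance value is an arbitrary assumption for this example.
from sklearn.metrics import roc_auc_score

def is_model_stale(model, recent_X, recent_y, baseline_auc, tolerance=0.05):
    """Return True if AUC on recent labeled data has dropped materially below baseline."""
    current_auc = roc_auc_score(recent_y, model.predict_proba(recent_X)[:, 1])
    return (baseline_auc - current_auc) > tolerance
```

A check like this can run on a schedule and trigger a request to retrain or replace the model.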

The promise of predictive analytics, including competitive advantage (68%), new revenue opportunities (55%) and increased profitability (52%), is significant. But to realize these advantages, companies must transform how they work. In terms of people and processes, a more collaborative strategy may be necessary. Analysts need tools and skills in order to use predictive analytics effectively. A new generation of technology is also becoming available that makes predictive analytics easier to apply, use and deploy into line-of-business processes. This will help organizations significantly, as there are not enough data scientists and specially trained predictive analytics professionals available for organizations to hire or afford.
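As an example of what deployment into line-of-business processes can look like, the sketch below wraps a previously trained model in a small scoring service that an operational application could call at the point of decision. The route, feature names and model file are assumptions for illustration.

```python
# Illustrative sketch: expose a trained model as an HTTP scoring service with Flask.
# The saved model file, route and input fields are hypothetical.
from flask import Flask, request, jsonify
import joblib

app = Flask(__name__)
model = joblib.load("churn_model.pkl")   # assumed artifact from an earlier training step

@app.route("/score", methods=["POST"])
def score():
    """Return a churn probability for one customer record posted as JSON."""
    record = request.get_json()
    features = [[record["tenure_months"],
                 record["monthly_spend"],
                 record["support_tickets"]]]
    probability = float(model.predict_proba(features)[0][1])
    return jsonify({"churn_probability": probability})

if __name__ == "__main__":
    app.run(port=5000)
```

A call center or e-commerce application could post a customer record to this service and adjust an offer based on the returned score.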

This benchmark research will look closely at the evolving use of predictive analytics to establish how it equips business to make decisions based on likely futures, not just the past.

Regards,

Tony Cosentino

VP & Research Director
