Friday, June 10, 2016

STRUCTURE SECURITY 2016 San Francisco

Technology is, by its nature, the world's fastest-moving industry, and cybersecurity threats move just as fast. Many of today's biggest cybersecurity issues can be attributed to the failure of technology companies to keep up with the advance of cyber threats or, in too many cases, to prioritize these threats as a core part of their operational plans.

 In a hyper-connected world, digital transformation of the global economy depends more than ever on the ability of companies to intelligently implement strategies and solutions that protect their employees, customers, partners, and assets. There’s no doubt that in 2016, cybersecurity has become a boardroom issue, but across many executive teams there remains little more than a rudimentary understanding of the problem's complexity and the significant challenges in developing, implementing and executing a cybersecurity strategy across an enterprise and its ecosystem. 

This fall, I'm looking forward to attending Structure Security, which will examine the evolution of the threat landscape and highlight the best practices that security professionals use to protect the world’s largest companies and institutions. With speakers from Facebook, Slack, Bromium, the FBI, ARM, RSA Security, Google, Blackstone and more, there should be valuable insight into threat trends, security and IoT, government, and the cloud. Join me this September at Structure Security.

I am pleased to announce that I have a discount code to share with you: get 30% off tickets with the discount code BIGDATASV (the link automatically adds the code at checkout).

The Future of Security
September 27-28, 2016
Golden Gate Club, San Francisco

Sunday, February 2, 2014

What’s the Big Deal About Big Data?

This is a summary of the big data presentation by Andrew McAfee at the Oracle CIO summit. McAfee is the principal research scientist at MIT’s Center for Digital Business.

Antonie Philips van Leeuwenhoek, the father of microbiology and a pioneer of the microscope, could very well be the patron saint of big data. Through his vastly improved microscopes, we have been able to see deeper into things. Because of that, we have also been able to measure things better. With better measurement comes deeper understanding, or even a completely different understanding. In other words, by perfecting the microscope, Leeuwenhoek opened up a whole new perspective for mankind.

What Leeuwenhoek did for the world in the seventeenth century, big data is doing today. By giving us better insight into the world, big data magnifies our ability to measure and understand all the things we truly care about.

Take the case of the bar code, for instance. This relic of old technology helped us know what was being bought, how fast it was being bought, and when it was being bought. But the bar code had a blind spot: it did not tell you what else the buyer looked at but did not buy. What had the buyer already picked up but not brought to the cashier for checkout? How did buyers respond to our coupons? Without this information, you are basing business decisions on a grossly incomplete picture. Big data completes that picture, so we are better able to measure and understand customer behavior, and thus improve our marketing strategies and sales.
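To make that concrete, here is a minimal, hypothetical sketch of what "completing the picture" could look like: joining point-of-sale records (the bar-code view) with browsing or cart events. All of the products, customers, and column names below are invented for illustration.

    import pandas as pd

    # Hypothetical point-of-sale data: what was actually bought (the bar-code view).
    purchases = pd.DataFrame({
        "customer_id": [1, 2, 3],
        "product":     ["cereal", "coffee", "cereal"],
        "used_coupon": [True, False, False],
    })

    # Hypothetical behavioral data: what shoppers viewed or carted but never bought.
    events = pd.DataFrame({
        "customer_id": [1, 1, 2, 3, 3],
        "product":     ["cereal", "granola", "coffee", "cereal", "tea"],
        "event":       ["viewed", "added_to_cart", "viewed", "viewed", "added_to_cart"],
    })

    # Join behavior onto purchases; rows that never matched a purchase are the blind spot.
    picture = events.merge(purchases, on=["customer_id", "product"], how="left", indicator=True)
    abandoned = picture[picture["_merge"] == "left_only"]
    print(abandoned[["customer_id", "product", "event"]])

The rows left over are exactly what the bar code alone could never tell you: the products considered or carted but never bought.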

The good news is that these days, data is plentiful. In fact, the problem is no longer data scarcity, but data explosion. There is so much data around, it’s hard to sift through it or even just keep up with it.

How much data is around, anyway?

In little more than a decade, our computers have gone from kilobytes to megabytes to gigabytes to terabytes. Now we are looking at petabytes, exabytes, and pretty soon, zettabytes. If we are to believe Moore’s law (and we do), the day is not far off when we will run out of prefixes for “bytes” and will have to invent new ones to describe the amount of data we are accumulating and receiving day by day.

But that’s a good thing, because big data can help us make decisions and predictions with an accuracy we’ve never seen before. Take, for instance, the Google self-driving car. It’s an amazing product made possible by big data. Although it may be unnerving at first to ride in a car that moves without a driver, it only takes a few minutes into the ride for you to realize that this car is in fact a better driver than most humans you’ve ever met (including yourself). Thanks to the car’s sensors and its access to a vast information database, it is able to slow down, speed up, stop, go, and turn exactly when and where it should.

Imagine the implications if we use big data technology to drive companies instead of cars: Performance gaps among companies will grow. Research has shown that companies that base their decision-making on the opinion of the highest paid person in the room consistently lose to companies whose decisions are data driven. The highest performers are pulling away from the low performers.

Therefore, it is high time for us to be more scientific about our decision-making process. We need to use data to shape our predictions and interventions, instead of merely using it to support decisions we have already made. We need to ask the right questions that will point us to the right data in the vast sea of data available. We need people who know what problems need to be fixed and what questions to ask; then we need to bring these people together with those who know how to find the answers using big data.

When we are able to form a team like that, only then can we compete and succeed in the big data driven future we’re heading into.



Saturday, December 21, 2013

Big Data: Changing the 2013 Marketing Landscape

Is it really important to consolidate and analyze big data, i.e., data from social media, click data from websites, and data coming from mobile devices and back-end systems?

Yes, it is important. According to a recent study done by Gleanster, the top performing companies of 2012 are the ones that use tangible measurements and data from mobile commerce, social commerce, Web analytics, etc., to make their marketing decisions. In other words, we can’t keep on making decisions based on creativity and gut feel anymore. If you keep ignoring the data out there, your company is going to be left behind.

Now, how do the top performers make big data manageable?

First, they use marketing automation tools, the most common of which is the batch email campaign. The other methods used, in order of popularity, are drip email campaigns, CRM integration, multi-channel campaigns, and trigger email campaigns.
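As a loose illustration (not taken from the Gleanster study, and not tied to any particular automation product), a trigger email campaign boils down to a simple behavioral rule evaluated against incoming event data; the event names and thresholds below are invented:

    from datetime import datetime, timedelta

    # Hypothetical customer events; in practice these would come from web analytics or a CRM.
    events = [
        {"customer": "alice@example.com", "type": "cart_abandoned",
         "when": datetime.now() - timedelta(hours=2)},
        {"customer": "bob@example.com", "type": "page_view",
         "when": datetime.now() - timedelta(days=3)},
    ]

    def trigger_emails(events, max_age=timedelta(hours=24)):
        """Return customers who should receive a triggered email right now."""
        to_email = []
        for e in events:
            # Rule: a cart abandoned within the last 24 hours triggers a reminder email.
            if e["type"] == "cart_abandoned" and datetime.now() - e["when"] <= max_age:
                to_email.append(e["customer"])
        return to_email

    print(trigger_emails(events))  # ['alice@example.com']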

Second, the top performers invest in system consolidation. This is important because most organizations use several different marketing technologies – such as social media monitoring, mobile commerce, email marketing, and landing page data harvesting – which often do not integrate with one another, so the company has no single system of record for the data these technologies collect. With system consolidation in place, it would be much easier to collect and analyze the pertinent data.

Of course, system consolidation does not appear out of thin air. You need skilled resources: IT people who know how to build and run the hardware and software needed for consolidating big data, and marketers who know how to use analytics to make and support justifiable decisions.

One skill that a good marketer should have is knowing how to find the right data. As previously mentioned, big data involves a flood of information. You need to sort through that information, to decide which is important and which is not.

And to know which is important, you need to know the right questions to ask. The right data is the answer to the right question.

Now let’s get to the gist: How is big data going to change the marketing landscape in 2013?

  1. Automated engagement. Presuming that you have achieved system consolidation, your next step will be to get full customer engagement. Business rules will be designed to automate how your company will interact with your customers based on how these customers behave.
     
     Inevitably, this will require financial resources, so your company will need to review its systems, retire the legacy systems that no longer work, and replace them with ones that do.
       
  2. Change in roles. In the past, marketing was marketing, and IT was IT. But as big data becomes more and more crucial to marketing, marketers need to learn to communicate with IT. And while marketers in the past were hired for their creativity, the marketers of the present and the future will be hired for their deep analytical skills and ability to make decisions based on big data findings.
       
  3. Analytics. As data becomes more accessible to marketers, new insights can be made. Instead of being limited to basic questions such as “How did our marketing campaign go?” we can ask more complex questions such as “What’s our marketing mix?” or “When is the best time to contact people, and is it better to call or email them?”
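To make that last question concrete, here is a minimal, hypothetical sketch of how response data can answer "when should we contact people, and by which channel?" The campaign log and column names below are invented for illustration.

    import pandas as pd

    # Hypothetical contact log: channel used, hour of day, and whether the contact converted.
    contacts = pd.DataFrame({
        "channel":   ["email", "email", "call", "call", "email", "call"],
        "hour":      [9, 14, 9, 14, 19, 19],
        "converted": [1, 0, 0, 1, 1, 0],
    })

    # Response rate by channel and hour of day: the data, not gut feel, answers the question.
    response_rates = contacts.groupby(["channel", "hour"])["converted"].mean()
    print(response_rates.sort_values(ascending=False))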

It is these questions and answers that make big data so crucial to marketing. When the marketer talks to IT, the marketer needs to be able to tell IT what sort of answers he or she is looking for. When the marketer tries to justify the request for new budget in the coming year, these questions and answers are the justification.

With big data, marketing in 2013 becomes largely a matter of knowing the right questions – because the answers are all already there.

Thursday, November 28, 2013

Thank You!

We would like to thank all our followers, well-wishers, and supporters on this Thanksgiving Day! Happy Thanksgiving!


Tuesday, October 9, 2012

Using Lean Agile Methodologies for Planning & Implementing a Big Data Project @ "Data Informed Live!" on Dec. 10


I am scheduled to speak at the "Data Informed Live!" event being held December 10-11, 2012 at the San Jose Marriott in San Jose, California. The event is focused on planning and implementing big data projects. This two-day event targets business and IT managers, with the goal of giving them the knowledge they need to develop and execute a "big data" plan for their companies.

Click here to register.
 
The first day of this two-day event is dedicated to planning, while day 2 focuses on implementation success factors. I am speaking on day 1, and my talk is about using lean agile methodologies for defining product requirements. Everyone knows that requirements change for data and software projects as things start taking shape, especially when the project involves new concepts and technologies such as big data, yet most traditional project management approaches treat requirement changes as exceptions. I will be talking about how the agile requirements-gathering and product design approach embraces change, and how, because change is anticipated in agile project development frameworks, it allows projects to stay on track.

I believe that traditional requirements-gathering processes do not work for big data projects because end users can't yet fully grasp the capabilities and power of big data, and hence cannot describe what they need.

An iterative agile approach, where requirements gathering, design, and implementation are done in small (2- to 4-week) iterations, allows end users to visualize what can be done and what is needed, and helps the development team understand how long it takes. It also allows the project to continue to move ahead while providing flexibility to accommodate changes as end users discover new requirements and developers figure out technical nuances. My session will explain how the agile approach works, provide advice for using it, and give real-world examples of how others have used it successfully.

Whether you are planning your "Big Data" project, or implementing it, "Data Informed Live!" will  prepare you for achieving success in your endeavors by covering the following critical issues: 

- Process: The key processes that both affect and will be affected by the proposed big data project
- Organization: How to design and re-engineer your organization to implement and utilize big data
- Tools, Platforms and Technology: What platforms and tools can be used to assist you in the design and implementation of the big data project

Click here to register or to find out more. 






Friday, May 11, 2012

Oracle Unfolds Details of Its Arsenal of Speed-of-Thought Big Data Analytics Tools

May 3, San Francisco – Today, Oracle laid out details of its “speed of thought” big data analytics suite at its Big Data and Extreme Analytics Summit.

The Oracle Exalytics in-memory BI machine forms the core of the analytics suite. Oracle Exalytics is a co-packaged combination of Sun hardware and various in-memory analytics software tools that have been co-designed for optimal combined performance. The set of analytics tools includes Oracle’s in-memory BI Foundation, an architecturally unified business intelligence solution that caters to reporting, ad hoc query, and analysis needs. It also includes the Endeca Information Discovery tool, acquired by Oracle in December of last year. This combination of co-optimized hardware and software is touted as “speed of thought” analytics, allowing advanced, intuitive exploration and analysis of data – whether structured or unstructured.
 
Oracle Exalytics can work against data from both the Oracle Big Data Appliance (a distribution of Hadoop with a matching co-designed Sun server) and the Oracle RDBMS.

Endeca, a suite of applications for unstructured data management, Web commerce, and business intelligence, has been positioned as the primary information discovery tool in Oracle’s BI suite.

The combination of Oracle Exalytics hardware and matching BI software gives businesses access to both structured and unstructured information from different sources through a single interface, providing a deep, wide, and timely view of their customers and allowing in-context exploration of their business data. Beyond knowing whether the business has increased its revenues or missed its sales targets, businesses today want – need – to know why.
 
In the past, to gain such information, businesses have had to go to separate data sources – promotions data, time frames, locations, channels – and still ended up with only an incomplete view of what they were looking for, as other influencing factors such as current events and climatic changes inevitably got missed.

With the Oracle Exalytics In-Memory machine and Oracle BI tools, businesses will be able to gather information, both structured and unstructured, from a far wider spectrum of sources – survey companies, CMS systems, customer reviews, tweets, news reports – and consolidate it all into one single view. Then, to facilitate big data analysis, the software allows you to arrange the data according to demographic categories such as gender, age, or location.
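This is not Oracle's actual interface, but as a generic illustration of that kind of demographic roll-up, blended feedback data (sentiment from reviews or tweets, scores from surveys) can be consolidated into one view and sliced by segment; the data and column names below are invented:

    import pandas as pd

    # Invented example: review sentiment and survey scores blended into one table.
    feedback = pd.DataFrame({
        "gender":    ["F", "M", "F", "M", "F"],
        "age_group": ["18-34", "18-34", "35-54", "35-54", "55+"],
        "region":    ["West", "West", "East", "East", "West"],
        "sentiment": [0.8, 0.4, 0.6, 0.9, 0.3],  # e.g. derived from tweets or customer reviews
        "nps":       [9, 6, 8, 10, 5],           # e.g. from a survey company
    })

    # One consolidated view, arranged by demographic categories.
    summary = feedback.groupby(["gender", "age_group"])[["sentiment", "nps"]].mean()
    print(summary)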

The key features of the co-engineered Oracle Exalytics In-Memory BI machine and software were described as:

  • advanced data visualization and exploration to quickly provide actionable insight from large amounts of data – Oracle claims the software can be used by somebody with no previous training, thanks to its user-friendly interface;
  • the ability to load data as is, without the need for costly cleansing, so iteration and evolution can be accelerated;
  • faster performance than competitive solutions for business intelligence, modeling, forecasting, and planning applications; and
  • comprehensiveness – the hybrid search/analytical database was designed to collect and compile all the information, regardless of source, format, or type, that a business needs to make informed critical decisions.

Saturday, April 28, 2012

IBM Makes a Big Deal About Big Data by Acquiring Vivisimo

IBM has confirmed that it has entered into a definitive agreement to acquire Vivisimo, a privately held, Pittsburgh-based company and a leading provider of federated discovery and navigation software that companies use to access and analyze big data.

No financial terms were revealed during the announcement.

Vivisimo software is well known for its ability to collect and deliver high-quality information from the widest range of data sources, whatever the format and location. The software not only automates the searching and collection of data, it also helps human users navigate it through a single, enterprise-wide view, allowing them to gain important insights and arrive at better solutions to the challenges encountered in operations.

This acquisition of Vivisimo by IBM speeds up the latter’s initiatives toward big data analytics with advanced federated capabilities, as it allows businesses and other entities to access, view, and analyze the complete repertoire of available data, both structured and unstructured, without having to transfer this data to another location.
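As a purely conceptual sketch of federation (this is not Vivisimo's or IBM's actual API; the source adapters below are hypothetical), the idea is to push the query out to each repository where the data already lives and merge only the results into a single view:

    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical adapters: each queries one repository in place and returns matching records.
    def search_crm(query):
        return [{"source": "CRM", "title": f"Account note mentioning '{query}'"}]

    def search_file_share(query):
        return [{"source": "FileShare", "title": f"Report containing '{query}'"}]

    def search_web_index(query):
        return [{"source": "Web", "title": f"Press release about '{query}'"}]

    def federated_search(query, adapters):
        """Run the query against every source concurrently; only results move, never the data."""
        with ThreadPoolExecutor() as pool:
            result_lists = pool.map(lambda search: search(query), adapters)
        return [hit for hits in result_lists for hit in hits]  # single enterprise-wide view

    for hit in federated_search("customer churn", [search_crm, search_file_share, search_web_index]):
        print(hit["source"], "-", hit["title"])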

As IBM combines its capabilities for big data analytics with Vivisimo’s software, IBM’s efforts towards automating data flow to business analytics applications will be getting a good push forward, resulting in greater capabilities for assisting clients in understanding customer behavior, managing network performance and customer churn, performing data-intensive marketing campaigns, and detecting fraud even as it happens.

"Navigating big data to uncover the right information is a key challenge for all industries," said IBM Information Management general manager Arvind Krishna. "The winners in the era of big data will be those who unlock their information assets to drive innovation, make real-time decisions, and gain actionable insights to be more competitive."

“As part of IBM, we can bring clients the quickest and most accurate access to information necessary to drive growth initiatives that increase customer satisfaction, streamline processes, and boost sales,” said Vivisimo CEO John Kealey.

According to IBM estimates, 2.5 quintillion bytes of data – roughly 2.5 billion gigabytes – are created every day by mobile phones, tablets, social media, sensors, and many other sources. The sheer quantity of this data makes it difficult for businesses to analyze it thoroughly and put it to use maximizing company efficiency, competitiveness, and profitability.

Vivisimo has more than a decade’s experience in harvesting and navigating humongous amounts of data, helping businesses get full value from their data and content. Vivisimo distinguishes itself from similar software by its ability to search and index data across multiple repositories. Currently, it serves over 140 clients in financial services, consumer goods, electronics, manufacturing, life sciences, and government. Some of the bigger names it serves include Procter & Gamble, the US Navy and the US Air Force, the Defense Intelligence Agency, LexisNexis, Bupa, and Airbus.

Upon completion of the purchase, around 120 Vivisimo employees will join IBM.