Big Data Essay (4 Pages)

User Generated

fhaalsybjre

Business Finance

Description

Big Data Essay:

1. Use the two articles I uploaded (the first two files are the screenshots for article 1).

2. Comment on the pros and cons of Big Data

3. PLUS AT LEAST ONE NEW ARTICLE YOU FOUND

(MLA format, 4 pages, 12-point font, double spaced; a works cited page is needed!)


Follow all the instructions and read the articles carefully.


Unformatted Attachment Preview

Getting Tech Giants to Pay You for Your Data

[Illustration: Minh Uong/The New York Times]

By Eduardo Porter, March 6, 2018

Should Facebook pay us for our puppy pictures? Of course, the idea sounds crazy. Posting puppies on Facebook is not a chore. We love it: Facebook’s 1.4 billion daily users spend the better part of an hour on it every day. It’s amazing that we don’t have to pay for it.

And yet the idea is gaining momentum in Silicon Valley and beyond: Facebook and the other technological Goliaths offering free online services — from which they harvest data from and about their users — should pay for every nugget of information they reap. The spring break pictures on Instagram, the YouTube video explaining Minecraft tactics, the internet searches and the Amazon purchases, even your speed following Waze on the way to spend Thanksgiving with your in-laws — this data is valuable. It will become more valuable, potentially much more so, in the not-too-distant future.

Getting companies to pay transparently for the information will not just provide a better deal for the users whose data is scooped up as they go about their online lives. It will also improve the quality of the data on which the information economy is being built. And it could undermine the data titans’ stranglehold on technology’s future, breathing fresh air into an economy losing its vitality.

The idea has been around for a bit. Jaron Lanier, the tech philosopher and virtual-reality pioneer who now works for Microsoft Research, proposed it in his 2013 book, “Who Owns the Future?,” as a needed corrective to an online economy mostly financed by advertisers’ covert manipulation of users’ consumer choices. It is being picked up in “Radical Markets,” a book due out shortly from Eric A. Posner of the University of Chicago Law School and E. Glen Weyl, principal researcher at Microsoft. And it is playing into European efforts to collect tax revenue from American internet giants. In a report obtained last month by Politico, the European Commission proposes to impose a tax on the revenue of digital companies based on their users’ location, on the grounds that “a significant part of the value of a business is created where the users are based and data is collected and processed.”

Users’ data is a valuable commodity. Facebook offers advertisers precisely targeted audiences based on user profiles. YouTube, too, uses users’ preferences to tailor its feed. Still, this pales in comparison with how valuable data is about to become, as the footprint of artificial intelligence extends across the economy. Data is the crucial ingredient of the A.I. revolution. Training systems to perform even relatively straightforward tasks like voice translation, voice transcription or image recognition requires vast amounts of data — like tagged photos, to identify their content, or recordings with transcriptions.

“Among leading A.I. teams, many can likely replicate others’ software in, at most, one to two years,” notes the technologist Andrew Ng. “But it is exceedingly difficult to get access to someone else’s data. Thus data, rather than software, is the defensible barrier for many businesses.”

We may think we get a fair deal, offering our data as the price of sharing puppy pictures. By other metrics, we are being victimized: In the largest technology companies, the share of income going to labor is only about 5 to 15 percent, Mr. Posner and Mr. Weyl write. That’s way below Walmart’s 80 percent. Consumer data amounts to work they get free.
“If these A.I.-driven companies represent the future of broader parts of the economy,” they argue, “without something basic changing in their business model, we may be headed for a world where labor’s share falls dramatically from its current roughly 70 percent to something closer to 20 to 30 percent.”

As Mr. Lanier, Mr. Posner and Mr. Weyl point out, it is ironic that humans are providing free data to train the artificial-intelligence systems to replace workers across the economy. Commentators from both left and right fret over how ordinary people will put food on the table once robots take all the jobs. Perhaps a universal basic income, funded by taxes, is the answer? How about paying people for the data they produced to train the robots?

If A.I. accounted for 10 percent of the economy and the big-data companies paid two-thirds of their income for data — the same as labor’s share of income across the economy — the share of income going to “workers” would rise drastically. By Mr. Weyl and Mr. Posner’s reckoning, the median household of four would gain $20,000 a year.
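That estimate can be sanity-checked with a rough back-of-envelope calculation. The figures below are assumed round numbers (U.S. GDP near $20 trillion and a population near 325 million), not the authors' actual model, so the result is only a ballpark:

```python
# Back-of-envelope check of the article's $20,000-per-household estimate.
# All inputs are assumed round numbers, not Weyl and Posner's actual model.
US_GDP = 20e12             # assumed approximate US GDP, in dollars
AI_SHARE = 0.10            # article's hypothetical: A.I. is 10% of the economy
DATA_PAYOUT_SHARE = 2 / 3  # firms pay two-thirds of that income for data
US_POPULATION = 325e6      # assumed approximate 2018 US population

pool = US_GDP * AI_SHARE * DATA_PAYOUT_SHARE  # total paid out for data
per_person = pool / US_POPULATION
household_of_four = 4 * per_person

print(f"Payout pool: ${pool / 1e12:.2f} trillion per year")
print(f"Per household of four: ${household_of_four:,.0f} per year")
# About $16,400, the same ballpark as the $20,000 the article cites.
```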
A critical consideration is that if people were paid for their data, its quality and value would increase. Facebook could directly ask users to tag the puppy pictures to train the machines. It could ask translators to upload their translations. Facebook and Google could demand quality information if the value of the transaction were more transparent. Unwilling to enter into a direct quid pro quo with their users, the data titans must make do with whatever their users submit.

The transition would not be painless. We would need to figure out systems to put value on data. Your puppy pictures might turn out to be worthless, but that college translation from Serbo-Croatian could be valuable. Barred from free data, YouTube and Facebook might charge a user fee for their service, like Netflix. Alternatively, they might make their money from training A.I. systems and pay some royalty stream to the many people whose data helped train them.

But whatever the cost, the transformation seems worthwhile. Notably, it could help resolve one of the most relevant questions coming into focus in this new technological age: Who will control the data?

Today, the dominant data harvesters in the business are Google and Facebook, with Amazon, Apple and Microsoft some way behind. Their dominance cannot really be challenged: Can you think of a rival search engine? Could another social network replace the one all your friends are on? This dominance might matter less if companies had to pay for their users’ data. Google, Facebook and Amazon would not be able to extend the network effects that cemented their place at the top of the technology ecosystem into the world of A.I. Everybody wants to be on Facebook because everybody’s friends are on Facebook. But this dominance could be eroded if rivals made direct offers of money for data. Companies with different business models might join the fray. “This is an opportunity for other companies to enter and say look, we will pay you for this data,” Mr. Posner said. “All this is so new that ordinary people haven’t figured out how manipulated they are by these companies.”

The big question, of course, is how we get there from here. My guess is that it would be naïve to expect Google and Facebook to start paying for user data of their own accord, even if that improved the quality of the information. Could policymakers step in, somewhat the way the European Commission did, demanding that technology companies compute the value of consumer data? In any event, there is probably a better deal out there, in your future, than giving Facebook free puppy pictures.

A version of this article appeared in print on March 7, 2018, on Page B1 of the New York edition with the headline “Getting Tech Giants to Pay You for Your Data.”

[Second article]

Productivity drains

Too often, analytics executives spend their time on activities such as managing the IT infrastructure to ensure data collection is happening, or deploying analytic workflows by re-coding processes, rather than actually building new data sets with different data sources. To break down this barrier, analytics executives need to:

• Make it clear that innovation occurs when analytics executives can actually perform analytics functions, such as identifying where data can make the best impact on the business.

• Work with management to identify other resources to manage the infrastructure.

• Calculate the time you spend on lower-level tasks that can be automated before approaching the C-suite to make a case for additional support. Money always talks, and inefficiency often comes with a big price tag.

Use case: identifying where data can make an impact

Surgeons at the University of Iowa Hospitals and Clinics needed to know if patients were susceptible to infections in order to make critical treatment decisions to reduce the infection rate. The hospital used Statistica to analyze data and make real-time predictions in the operating room. As a result, surgical site infections decreased by 58 percent, improving patient health and reducing costs as well. (A generic sketch of this kind of train-then-score workflow appears at the end of this subsection.)

… and technical (systems) expertise. In fact, according to an article by Louis Columbus in Forbes summarizing research by WANTED Analytics, in 2014 demand for computer systems analysts with big data expertise increased 89.9 percent, and demand for computer and information research scientists jumped 85.4 percent. As the need for corporate analytic initiatives and professionals grows, analytics executives are becoming increasingly important within organizations. Although their specific titles vary by industry, they tend to have either a specific background in analytics or emerge from a functional area within the business. The latter have built deep knowledge of systems and processes within their line of business, positioning them to apply insights and determine what problems need to be solved and how data can help. Few analytics executives come from an IT background.
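For readers unfamiliar with what "real-time predictions" involve, here is a generic train-then-score sketch in Python. It does not use Statistica or the hospital's actual data; the library choice (scikit-learn), the features and all the values are hypothetical stand-ins for the infection-risk use case above:

```python
# Generic sketch of real-time risk scoring, in the spirit of the
# infection-risk use case. NOT Statistica's workflow: a toy logistic
# regression over made-up features, to show the train-then-score shape.
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [age, surgery_minutes, glucose_mg_dl]
X_train = [[54, 120, 110], [71, 240, 180], [38, 60, 95],
           [66, 200, 165], [45, 90, 100], [80, 310, 190]]
y_train = [0, 1, 0, 1, 0, 1]  # 1 = surgical site infection occurred

model = LogisticRegression().fit(X_train, y_train)

# Score a new patient "in the operating room":
new_patient = [[62, 180, 150]]
risk = model.predict_proba(new_patient)[0][1]
print(f"Estimated infection risk: {risk:.0%}")
```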
New technologies are needed as well

We've all seen the stats: data is growing at a rate faster than we've ever seen, and it comes in all shapes and sizes. Moreover, it is becoming more diverse along multiple dimensions:

• Structured versus unstructured data – The ratio of structured data (fixed-field records and files) to unstructured data (such as digital images, video and long-form text) is shifting as organizations collect and store an increasing amount of unstructured information from a variety of channels.

• On-premises versus in the cloud – As recently as 10 years ago, companies had very little data in the cloud. Now, much more corporate data is accessed from or stored in the cloud, and CSC's Data rEvolution report predicts that by 2020, more than one third of the data produced will live in or pass through the cloud.

• At rest versus in motion – Data does not stay put. Data sets for analysis often must include data in motion, such as data coming in from machines, transactions, mobile devices and sensors.

To accommodate this broad mix of data, organizations are supplementing traditional relational databases with NoSQL databases, database appliances, and cloud and open-source technologies. Apache Hadoop and Apache Spark are open-source technologies that offer new methods for storing data, running advanced analytics on it and running applications on clusters of commodity hardware. In fact, according to a recent analytics study by Enterprise Management Associates (EMA), 56 percent of respondents identified cloud-based analytics as an important component of their analytics strategy. As infrastructure shifts to emerging platforms, your analytics platform will need the capability to analyze the data where it lies (that is, using in-database analytics) rather than shuffling data back and forth as with the systems of old; a minimal sketch of this idea follows below.
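The sketch below pushes an aggregation down to the database instead of pulling raw rows into the application. Python's built-in sqlite3 stands in for a production database here, and the table and column names are hypothetical:

```python
# Minimal sketch of "analyze the data where it lies" (in-database analytics),
# using Python's built-in sqlite3 as a stand-in for a production database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sensor_readings (device_id TEXT, temp_c REAL)")
conn.executemany("INSERT INTO sensor_readings VALUES (?, ?)",
                 [("a1", 21.5), ("a1", 22.1), ("b2", 19.8), ("b2", 20.4)])

# The GROUP BY runs inside the database; only the small summary result
# crosses over to the application, not every raw row.
rows = conn.execute("""
    SELECT device_id, AVG(temp_c) AS avg_temp, COUNT(*) AS n
    FROM sensor_readings
    GROUP BY device_id
""").fetchall()

for device_id, avg_temp, n in rows:
    print(device_id, round(avg_temp, 2), n)
```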
Data preparation and blending: prerequisites for analytics

Before analytics can begin, data must be collected, aggregated and prepared for analysis. Specifically, analytics executives must ensure that:

• All data, wherever it lives, can be accessed. Don't put yourself in the position of having to omit data just because it will be too difficult to aggregate it with other data you are using.

• The data is blended and prepared in a way that ensures it is consistent and coherent, making use of any master data management (MDM) capabilities within the organization. For example, units of measurement should be standardized.

• The aggregation process respects data privacy, and the analytics workflow queries the data in a secure and validated way. That is, users should be able to retrieve and see only the data that they are entitled to see.

To address these challenges effectively, keep the following best practices in mind:

• Choose solutions that can integrate data from across the enterprise as well as inside and outside your firewall – Adding new and valuable data to applications and analytics creates new opportunities for organizations to differentiate themselves and create competitive advantage.

• Automate data blending and preparation tasks – To meet the challenges of complexity and a wider end-user community, organizations need solutions that provide business users with self-service capabilities. Save time and drive return on investment (ROI) by deploying data preparation tools that automate query, aggregation, data quality and transformation tasks. A minimal sketch of such a blending-and-standardization step follows below.
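A minimal sketch of blending two sources whose units of measurement differ, using pandas for illustration; the two plant data sets and their column names are invented:

```python
# Minimal sketch of a blending-and-standardization step, assuming two
# hypothetical sources that record weight in different units.
import pandas as pd

plant_a = pd.DataFrame({"batch_id": [101, 102], "weight_kg": [12.0, 15.5]})
plant_b = pd.DataFrame({"batch_id": [201, 202], "weight_lb": [30.0, 41.0]})

# Standardize units of measurement before blending (1 lb = 0.453592 kg).
plant_b["weight_kg"] = plant_b["weight_lb"] * 0.453592
plant_b = plant_b.drop(columns=["weight_lb"])

# Blend into one consistent, coherent data set.
blended = pd.concat([plant_a, plant_b], ignore_index=True)
print(blended)
```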
Add self-service tools

With a shortage of skilled data management staff to perform complex data integration, reporting and analysis work, organizations should invest in self-service tools that enable users with limited IT or analytics expertise to complete these tasks more easily. Some solutions leverage crowdsourcing to share integration methods known to work well.

Use case: aggregating heterogeneous data

Shire, a global specialty biopharmaceutical company, had been using a complex network of heterogeneous systems, including LIMS, Excel and JMP, to manage vast amounts of manufacturing data. Employees could manually collect and report on key findings, but ensuring accuracy was extremely time-consuming. The organization selected Statistica, which provides a validated single point of entry for data capture across processes and locations. Business users were able to publish charts, graphs and reports to a web portal, improving accuracy while saving considerable time.

Solutions that complement Statistica

In addition to Statistica, analytics executives have turned to other Dell Software solutions to complement its functionality:

• Toad Data Point – This cross-platform query and data integration solution simplifies data access, analysis and provisioning. It provides nearly limitless data connectivity, desktop data integration, visual query building and workflow automation. You can easily access data in traditional relational databases and non-traditional sources such as Salesforce.com, BusinessObjects and Oracle Business Intelligence Enterprise Edition (OBIEE), as well as NoSQL sources such as Hadoop, Cassandra, MongoDB, SimpleDB and DynamoDB.

• Boomi – The industry's largest integration platform as a service (PaaS) enables you to combine cloud and on-premises applications without software or appliances. You can sync data between business-critical applications, either on premises or in the cloud, such as Salesforce.com, while eliminating the costs of legacy middleware, appliances and custom code.

• Toad Intelligence Central – This centralized repository enables efficient, well-governed collaboration. Users can centrally publish work developed in Toad Data Point or share PL/SQL code and code analysis reports from Toad for Oracle.

[Figure 3: Dell's portfolio of analytics solutions across the Data → Prediction → ROI value chain: advanced analytics and business intelligence (Statistica), integration (Boomi), data management (Toad Data Point and Toad Intelligence Central), and infrastructure.]

IT challenges

Legacy systems

Legacy systems, and the languages used to build on or pull data from those systems, are often proprietary, which limits innovation. For instance, it would be extremely difficult to invoke an analytics workflow or method from a web service via a legacy platform. Nevertheless, the cost to rip and replace a legacy platform is high, so even if analytics executives know a better long-term solution exists, an overhaul might be vetoed by those holding the purse strings. To break down this barrier, analytics executives should:

• Call attention to hidden costs. Yearly license renewals and maintenance could cost millions, meaning a replacement could break even more quickly than executives might think (see the rough break-even sketch after this list).

• Look for modular and embeddable analytics technology that can fit into or leverage existing systems, and consider the alternative to replacing the legacy solution: working around it.

• Look for simple solutions. There's no need to pay for a high-end system with many bells and whistles if you need to complete a fairly routine task. Likewise, think twice about deploying a single system to solve all your data, infrastructure and analytics challenges. While it may seem there is simplicity in deploying a system from a single vendor, it's difficult to truly get everything you need within one platform. Again, modularity is key.
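A rough sketch of that break-even argument. Every dollar figure here is assumed purely for illustration, not taken from any vendor's pricing:

```python
# Rough break-even sketch for the "hidden costs" point above: once yearly
# license renewals and maintenance are counted, a replacement pays for
# itself sooner than the sticker price suggests. All figures are assumed.
LEGACY_ANNUAL_COST = 1.5e6       # assumed yearly license + maintenance
REPLACEMENT_UPFRONT = 4.0e6      # assumed cost of the new platform
REPLACEMENT_ANNUAL_COST = 0.4e6  # assumed yearly cost of the new platform

year, legacy_total, replacement_total = 0, 0.0, REPLACEMENT_UPFRONT
while replacement_total > legacy_total:
    year += 1
    legacy_total += LEGACY_ANNUAL_COST
    replacement_total += REPLACEMENT_ANNUAL_COST

print(f"Replacement breaks even in year {year}")  # year 4 with these inputs
```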

Explanation & Answer

Please find attached. Let me know if you need edits.

Running head: BIG DATA

Institutional affiliation:
Student’s Name:
Course Code:
Date:

BIG DATA
Introduction

Big data refers to modern methods of accessing, analyzing, and processing data sets that exceed the capacity of traditional data analysis and processing tools. The concept takes its name from processes such as assessing large volumes of unstructured information. The sections below outline the core advantages of using big data across technical fields in the modern era and the considerations firms face in adopting it.
Big data applies across diverse platforms and contexts, including varied user needs, constantly shifting consumer trends, and the expanding scope of workplace roles even as new technological methods of statistical analysis are adopted. Adopting big data therefore requires significant self-assessment, weighing the pros and cons of applying the approach in a given industry.
The disadvantages of adopting big data in a firm
Big data takes unstructured data as its input (Bottles, Begoli & Worley, 2014) but returns structured data as its output. One standard approach is to ensure that the organization has the capacity to store the information flowing into its big data resource center. Preserving incoming data thus emerges as a central concern in the adoption of big data practices in firms.
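A toy illustration of that unstructured-in, structured-out distinction; the posts and the extracted fields below are invented for the example:

```python
# Toy illustration of the essay's point: big data pipelines often take
# unstructured input (raw text) and return structured output (records).
raw_posts = [
    "Loving the new phone!! #tech #mobile",
    "Worst delivery experience ever... #logistics",
]

structured = [
    {
        "words": len(post.split()),
        "hashtags": [w for w in post.split() if w.startswith("#")],
        "exclaims": post.count("!"),
    }
    for post in raw_posts
]
print(structured)
```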
It is a common theme that having information puts one in an advantageous position.
In this respect, information can bring great benefits for ...

