13 Open Source Projects Transforming AI and Machine Learning

From deepfakes to natural language processing and more, the open source world is ripe with projects to support software development on the frontiers of artificial intelligence and machine learning.

Please Note: this article has been kindly reproduced from the site: infoworld.com 


Google’s DeepMind AI cracks 3D structure of nearly all proteins known to science in major breakthrough

‘Determining 3D structure of a protein used to take many months or years, it now takes seconds’.

Please Note: this article has been kindly reproduced from the site: independent.co.uk 

Google’s DeepMind AI has predicted the 3D structure of nearly all proteins known to science, an advance that can lead to a better understanding of rare genetic diseases, and also help develop new vaccines and drugs.

DeepMind announced on Thursday that its AlphaFold AI has cracked the structure of over 200 million proteins – the entire “universe of proteins” known to scientists.

Proteins are the building blocks of life and play myriad roles in the body, as structural units, as transport molecules, and as enzymes that catalyse chemical reactions.

The unique 3D structure that each of these proteins takes in the body, by the folding of their constituent amino acid molecule chains, plays a major role in their function.

For decades, biologists have attempted to predict protein structures via expensive experimental means, including the use of painstaking time-consuming methods like X-ray crystallography or electron microscopy.

With the advent of computers, researchers have built virtual models of how amino acid chains making up proteins would fold under different conditions and lead to the overall 3D structure of proteins.

Since the release of AlphaFold in 2020, over half a million researchers across the world have used the AI application to crack the structure of “nearly all catalogued proteins known to science.”

AlphaFold was exposed to about 100,000 known protein folding structures – already cracked by scientists – from which the AI has learned to decode the rest, the company says.

The latest advance, according to DeepMind, will expand the AlphaFold Protein Structure Database (AlphaFold DB) from nearly 1 million structures to over 200 million structures, with the potential to accelerate progress on important real-world problems “ranging from plastic pollution to antibiotic resistance.”

In the new update, DeepMind has added the predicted structures for proteins found in plants, bacteria, animals, and other organisms, which may help solve important global issues, “including sustainability, food insecurity, and neglected diseases,” the company noted in a statement.

“You can think of it as covering the entire protein universe. We’re at the beginning of a new era now in digital biology,” DeepMind chief Demis Hassabis said at a press briefing.

With the new structure predictions, scientists can better understand if variant forms of the proteins that differ between individuals are linked to diseases.

For example, protein structures predicted by AlphaFold are helping in the development of drugs for neglected tropical diseases like leishmaniasis and Chagas disease – illnesses that disproportionately affect people in poorer parts of the world.

And in April, scientists at Yale University used AlphaFold’s database to develop a new malaria vaccine.

By cracking the structure of key disease-linked proteins in the body, scientists can model drugs that effectively activate or take the role of malfunctioning proteins, or suppress those causing problems.

Decoding protein structures does not just aid in curing diseases but can also help engineer solutions for global environmental issues.

For instance, researchers have joined hands with DeepMind’s AI to develop faster-acting enzymes to break down and recycle some of the world’s most polluting single-use plastics.

“AlphaFold is the singular and momentous advance in life science that demonstrates the power of AI. Determining the 3D structure of a protein used to take many months or years, it now takes seconds,” Eric Topol, Founder and Director of the Scripps Research Translational Institute, said.

“AlphaFold has already accelerated and enabled massive discoveries, including cracking the structure of the nuclear pore complex. And with this new addition of structures illuminating nearly the entire protein universe, we can expect more biological mysteries to be solved each day,” Dr Topol added.

How your AI-enabled enterprise is underusing your people – and what to do about it

Please Note: this article has been kindly reproduced from the site: independent.co.uk 

Three out of four executives fear going out of business in the next five years if they don’t scale artificial intelligence (AI). Why? Because of the revolutionary impact AI can have on enterprises. Not only can AI help increase revenue, reduce costs and better manage risk, it can also improve everyday processes and operations to turn your organisation into an efficiency machine.

Many enterprises are already leveraging AI, but often they don’t realise they could be making an even bigger impact if they took advantage of the secret weapon they already have in place – their people. We covered this topic in depth in a recent video interview with Peter Lee, President and Chief Executive Officer at RapidMiner, and Andy Walter, Board & Strategic Advisor at RapidMiner and former SVP at P&G.

How enterprises are underutilising their people

Most businesses don’t prioritise data literacy for everyone, meaning that data skills are only expected of data experts. However, enterprises have a wealth of expertise across the organisation – from IT to marketing to warehouse workers – and these people all know their area of the business like the back of their hand. Organisations aren’t currently connecting that invaluable business context with data expertise, which is a huge missed opportunity.

Additionally, siloed business functions don’t leave much room to effectively use analytics – there’s no easy way to collaborate. These silos are enforced by organisations choosing to either outsource their data science projects to expensive consultants or hire additional, separated data scientists internally. Both these approaches are short-term solutions at best and leave so much internal talent to waste.

Three strategies for effectively leveraging your people and data

Upskill your talent

Analytics is a team sport, and existing employees (for example, domain experts in different business units) already understand the business context behind the problem data science initiatives are trying to solve. Team members who don’t necessarily have the word “data” in their title can benefit from learning how to apply data science to their everyday work, while data science experts benefit from understanding the broader business problem they’re working to solve.

Executive sponsorship is vital to encourage, and set up a framework for, continuous collaboration. Not only will upskilling existing employees remove the need for recruiting new talent in a fiercely competitive market, but it will also make existing staff feel more valued and appreciated. According to a survey by Tableau and Forrester Research, 80 per cent of employees are more likely to stay at companies that provide them with the data skills they need.

Demonstrate the value of AI

Some employees might be initially sceptical about how artificial intelligence will impact their jobs, and thus what their involvement in AI initiatives will look like. Demonstrating the value of artificial intelligence to employees across the broader organisation shows them how AI can add value to their own work, resulting in increased enthusiasm and buy-in.

Employees can get hands-on experience with AI, produce real results, and start to build AI into the organisation’s cultural foundation. This will lead to increased exposure to data science, greater data literacy and, of course, more valuable results.

Implement the right tools

The final ingredient in ensuring organisations can maximise the potential of AI is having the right tools and resources in place. A multi-persona data science platform – such as RapidMiner – empowers cross-functional employees to execute data science projects and generate real, AI-driven business value on their own.

With a multi-persona tool, anyone in the organisation, regardless of experience, can use AI to automate processes, build new apps and create predictive models. Platforms such as RapidMiner provide a central hub for organisations to collaborate and communicate across all current data science projects too, bringing previously siloed team members closer together.

The message to organisations is clear: now is the time to leverage your in-house talent to deliver on the promise of AI. By teaching your employees new skills, showing them the positive impacts data science will have on their everyday work, and providing them with the tools they need to be successful, you’ll not only break down damaging organisational silos, but you’ll also create a data-centric workplace culture. In the words of Miro Kazakoff, Senior Lecturer at MIT Sloan School of Management, “In a world of more data, the companies with more data-literate people are the ones that are going to win.”

Google’s former head says AI is as dangerous as nuclear weapons

Eric Schmidt said that he was ‘naive about the impact of what we were doing’ but that ‘arming’ AI could ‘trigger the other side’

Please Note: this article has been kindly reproduced from the site: independent.co.uk 

Google’s former chief executive Eric Schmidt has called artificial intelligence as dangerous as nuclear weapons.

Speaking at the Aspen Security Forum earlier this week, Eric Schmidt said that he was “naive about the impact of what we were doing”, but that information is “incredibly powerful” and “government and other institutions should put more pressure on tech to put these things consistent with our values.”

“The leverage that tech has is very, very real. If you think about, how will we negotiate an AI agreement? First you have to have technologists that understand what’s going to happen, and then you have awareness on the other side.

“Let’s say we want to have a chat with China on some kind of treaty around AI surprises. Very reasonable. How would we do it? Who in the US government would work with us? And it’s even worse on the Chinese side. Who do we call? … we’re not ready for the negotiations we need.

“In the 50s and 60s, we eventually worked out a world where there was a ‘no surprise’ rule about nuclear tests and eventually they were banned. It’s an example of a balance of trust, or lack of trust; it’s a ‘no surprises’ rule.

“I’m very concerned that the U.S. view of China as corrupt or Communist or whatever, and the Chinese view of America as failing…will allow people to say ‘Oh my god, they’re up to something,’ and then begin some kind of conundrum … because you’re arming or getting ready, you then trigger the other side.”

The capabilities of artificial intelligence have been stated – and overstated – numerous times over the years. Tesla chief executive Elon Musk has often said that AI is highly likely to be a threat to humans, and recently Google fired a software engineer who claimed its artificial intelligence had become self-aware and sentient.

However, experts have often reminded people that the issue of AI is what it is trained for and how it is used by humans. If the algorithms that train these systems are based on flawed, racist, or sexist data, then the results will reflect that.

Using AI to train teams of robots to work together

Researchers have developed a method to train multiple agents, such as robots or drones, to work together using multi-agent reinforcement learning, a type of artificial intelligence.

Please Note: this article has been kindly reproduced from the site: sciencedaily.com 

Written by: University of Illinois Grainger College of Engineering

When communication lines are open, individual agents such as robots or drones can work together to collaborate and complete a task. But what if they aren’t equipped with the right hardware or the signals are blocked, making communication impossible? University of Illinois Urbana-Champaign researchers started with this more difficult challenge. They developed a method to train multiple agents to work together using multi-agent reinforcement learning, a type of artificial intelligence.

“It’s easier when agents can talk to each other,” said Huy Tran, an aerospace engineer at Illinois. “But we wanted to do this in a way that’s decentralized, meaning that they don’t talk to each other. We also focused on situations where it’s not obvious what the different roles or jobs for the agents should be.”

Tran said this scenario is much more complex and a harder problem because it’s not clear what one agent should do versus another.

“The interesting question is how do we learn to accomplish a task together over time,” Tran said.

Tran and his collaborators used machine learning to solve this problem by creating a utility function that tells the agent when it is doing something useful or good for the team.

“With team goals, it’s hard to know who contributed to the win,” he said. “We developed a machine learning technique that allows us to identify when an individual agent contributes to the global team objective. If you look at it in terms of sports, one soccer player may score, but we also want to know about actions by other teammates that led to the goal, like assists. It’s hard to understand these delayed effects.”
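
The paper’s exact formulation isn’t spelled out in the article, but a common credit-assignment scheme with the same flavor is “difference rewards”: score each agent by how much the team outcome would drop without its contribution. The sketch below is a hedged illustration of that idea only; the function and agent names are hypothetical, not from the research.

```python
# Hedged sketch of credit assignment via difference rewards (illustrative only;
# not the authors' exact method). An agent is credited with the gap between the
# team objective with its action included and with its action removed.

def difference_reward(team_objective, actions, agent_id):
    """G(all actions) minus G(all actions except agent_id's): its contribution."""
    without_agent = {k: v for k, v in actions.items() if k != agent_id}
    return team_objective(actions) - team_objective(without_agent)

# Toy team objective: number of distinct zones the team covers.
def zones_covered(actions):
    return len(set(actions.values()))

actions = {"drone_a": "zone1", "drone_b": "zone2", "drone_c": "zone2"}
print(difference_reward(zones_covered, actions, "drone_a"))  # 1: covers a unique zone
print(difference_reward(zones_covered, actions, "drone_c"))  # 0: redundant coverage
```

In this toy example, drone_c scores zero not because it acted wrongly but because its action added nothing the team lacked.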

The algorithms the researchers developed can also identify when an agent or robot is doing something that doesn’t contribute to the goal. “It’s not so much the robot chose to do something wrong, just something that isn’t useful to the end goal.”

They tested their algorithms using simulated games like Capture the Flag and StarCraft, a popular computer game.

You can watch a video of Huy Tran demonstrating related research using deep reinforcement learning to help robots evaluate their next move in Capture the Flag.

“StarCraft can be a little bit more unpredictable — we were excited to see our method work well in this environment too.”

Tran said this type of algorithm is applicable to many real-life situations, such as military surveillance, robots working together in a warehouse, traffic signal control, autonomous vehicles coordinating deliveries, or controlling an electric power grid.

Tran said Seung Hyun Kim did most of the theory behind the idea when he was an undergraduate student studying mechanical engineering, with Neale Van Stralen, an aerospace student, helping with the implementation. Tran and Girish Chowdhary advised both students. The work was recently presented to the AI community at the Autonomous Agents and Multi-Agent Systems peer-reviewed conference.

Some cloud-based AI systems are returning to on-premises data centers

AI/ML model training and knowledge-based storage and processing are more costly on a cloud than many thought, and prices for compute and storage equipment have fallen.

Please Note: this article has been kindly reproduced from the site: infoworld.com 

Written by: David Linthicum

As a concept, artificial intelligence is very old. My first job out of college almost 40 years ago was as an AI systems developer using Lisp. Many of the concepts from back then are still in use today. However, it’s about a thousand times less expensive now to build, deploy, and operate AI systems for any number of business purposes.

Cloud computing revolutionized AI and machine learning, not because the hyperscalers invented it but because they made it affordable. Nevertheless, I and some others are seeing a shift in thinking about where to host AI/ML processing and AI/ML-coupled data. Using the public cloud providers was pretty much a no-brainer for the past few years. These days, the value of hosting AI/ML and the needed data on public cloud providers is being called into question. Why?

Cost, of course. Many businesses have built game-changing AI/ML systems in the cloud, and when they get the cloud bills at the end of the month, they understand quickly that hosting AI/ML systems, including terabytes or petabytes of data, is pricey. Moreover, data egress and ingress costs (what you pay to send data from your cloud provider to your data center or another cloud provider) will run up that bill significantly.
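
As a rough, hedged illustration of how egress alone adds up (the per-gigabyte rate and monthly volume below are assumptions for the arithmetic, not any provider’s quoted price):

```python
# Back-of-the-envelope egress estimate; rate and volume are illustrative assumptions.
egress_rate_per_gb = 0.09   # USD per GB leaving the cloud (assumed rate)
monthly_egress_tb = 50      # data pulled back on-premises each month (assumed volume)

monthly_cost = monthly_egress_tb * 1024 * egress_rate_per_gb
print(f"${monthly_cost:,.0f}/month")  # $4,608/month, roughly $55,000 a year on egress alone
```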

Companies are looking at other, more cost-effective options, including managed service providers and co-location providers (colos), or even moving those systems to the old server room down the hall. This last group is returning to “owned platforms” largely for two reasons.

First, the cost of traditional compute and storage equipment has fallen a great deal in the past five years or so. If you’ve never used anything but cloud-based systems, let me explain. We used to go into rooms called data centers where we could physically touch our computing equipment—equipment that we had to purchase outright before we could use it. I’m only half kidding.

When it comes down to renting versus buying, many are finding that traditional approaches, including the burden of maintaining your own hardware and software, are actually much cheaper than the ever-increasing cloud bills.

Second, many are experiencing some latency with the cloud. The slowdowns happen because most enterprises consume cloud-based systems over the open internet, and the multitenancy model means that you’re sharing processors and storage systems with many others at the same time. Occasional latency can translate into many thousands of dollars of lost revenue a year, depending on what you’re doing with your specific cloud-based AI/ML system.

Many of the AI/ML systems that are available from cloud providers are also available on traditional systems. Migrating from a cloud provider to a local server is cheaper and faster, and more akin to a lift-and-shift process, if you’re not locked into an AI/ML system that only runs on a single cloud provider.

What’s the bottom line here? Cloud computing will continue to grow. Traditional computing systems whose hardware we own and maintain, not as much. This trend won’t slow down. However, some systems, especially AI/ML systems that use a large amount of data and processing and happen to be latency sensitive, won’t be as cost-effective in the cloud. This could also be the case for some larger analytical applications such as data lakes and data lake houses.

Some could save half the yearly cost of hosting on a public cloud provider by repatriating the AI/ML system on-premises. That business case is just too compelling to ignore, and many won’t.

Cloud computing prices may come down to accommodate these workloads that are cost-prohibitive to run on public cloud providers. Indeed, many workloads may not be built there in the first place, which is what I suspect is happening now. It is no longer always a no-brainer to leverage the cloud for AI/ML.

What is behavioral analytics and when is it important?

The ability to mine large amounts of data to study how users act offers long-reaching business benefits and risk reduction opportunities.

Please Note: this article has been kindly reproduced from the site: infoworld.com 

You’re shopping for a car. You visit a manufacturer’s website to learn about model trims, review deals listed on the local dealer’s website, and then visit the dealership. What information can the sales rep review to learn about your purchasing needs and determine the best options to offer you?

The security operations center receives an alert about an employee’s activities on the network. Is the employee learning about different business areas and just working at unexpected hours from a remote location? Or is this malicious behavior and the SOC should take action?

These are examples of insights that user behavior analytics can provide. Common use cases include increasing business-to-business and business-to-consumer sales, improving customer experience, detecting anomalies, alerting on risks, and leveraging data from Internet of Things devices to identify dangerous conditions.

Rosaria Silipo, principal data scientist and head of evangelism at KNIME, offers this simple definition of behavioral analytics. She says, “Behavioral analytics studies people’s reactions and behavior patterns in particular situations.”

Business opportunities in behavioral analytics

Behavioral analytics is particularly important any time a product or service has many people doing numerous things where there are both opportunities to improve outcomes and to reduce risks. Examples include customer buying habits on large-scale e-commerce websites, healthcare applications, gaming platforms, and wealth management in banking. Silipo explains further, “The goal is to study the mass of people, and the key is the availability of large amounts of data.”

Kathy Brunner, CEO of Acumen Analytics, cites research projecting that the global behavior analytics market will reach $2.2 billion by 2026, up from $427.3 million, at a compound annual growth rate of 32% from 2022 to 2026.

Brunner shares these insights on the business opportunities. “The current focus is mainly retail, and why not? Where I see real transformation is in combining this capability with AI/ML, other advanced modeling technologies, and real-world evidence in healthcare data. Imagine the impact from figuring out how best to get patients into clinical trials, improving drug discovery, and advancing patient outcomes with precision and personalized medicine.”

So although behavioral analytics can be an issue if an implementation violates privacy norms or compliance regulations, it can also lead to very positive outcomes for consumers and businesses.

Mitigating risks with behavioral analytics

Behavioral analytics is often used for business opportunities, but the techniques are just as applicable to identifying and alerting on risks. Behavioral analytics is used in banking for fraud detection, is embedded in AIops tools to help improve incident management, and helps gaming systems identify cheaters.

Large enterprises with many global employees, contractors, and suppliers also leverage behavioral analytics to spot suspicious activities. Isaac Kohen, vice president of research and development at Teramind, says, “User and entity behavior analytics can identify and alert the organization to a wide range of anomalous behaviors. Potential threats can be malicious, inadvertent, or compromised activities by an employee, user, or third-party entity. It is used in many industries to prevent insider threats and analyze user behaviors for compliance and regulatory requirements.”

The data science behind behavioral analytics is often applied to people, customers, and users, but it can also be applied to the entities operating in large-scale systems.

Todd Mostak, CTO and cofounder of Heavy.AI, shares this wider perspective: “Behavioral analytics is a data-driven approach to tracking, predicting, and leveraging behavior data to make informed decisions. With the shipping delays and supply chain shortages today, behavioral analytics technology can monitor the activity of billions of ships, examine ports, and study global shipping patterns to help experts solve these issues.”

The data science behind behavioral analytics

Behavior analytics is a broad application of data science, machine learning, and AI techniques. Scott Toborg, head of data science and analytics products at Teradata, explains the underlying data science. “Behavioral analytics leverages customer data about who they are (demographics), what they are doing (events), and who they interact with (connections) to derive better insights, predictions, and actions. The process consists of segmentation, predictive modeling, and prescriptive action.”
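
As a minimal sketch of the first of those steps, segmentation might look like the following, assuming scikit-learn and a small table of per-customer behavioral features (the feature columns and values are hypothetical):

```python
# Hypothetical segmentation step: cluster customers by behavioral features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Rows are customers; columns: visits/month, average basket value, days since last purchase.
behavior = np.array([
    [12, 80.0,  2],
    [ 1, 15.0, 60],
    [ 8, 45.0,  7],
    [ 2, 20.0, 45],
])
scaled = StandardScaler().fit_transform(behavior)  # put features on comparable scales
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
print(segments)  # one segment label per customer, e.g. [0 1 0 1]
```

The predictive-modeling and prescriptive-action steps then operate on these segments, for example predicting churn per segment and choosing an intervention for each.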

Toborg suggests that behavioral analytics shares many of the same opportunities data science targets but also faces challenges in developing and supporting machine learning models. He continues, “When properly implemented, behavioral analytics results in better customer experiences, more precise targeted marketing, and greater engagement. However, there are challenges, including privacy, model bias, and stereotyping.”

Useful techniques and technologies

Behavioral analytics is a set of operations, data, and technology practices targeted at specific business opportunities or aimed to mitigate a set of quantifiable risks. There are many ways organizations can implement behavior analytics. The list below is a subset of the available solutions.

  • Platforms such as content management, e-commerce, and digital experience often include capabilities to support behavioral analytics.
  • Customer data platforms centralize data on customers and their actions while providing integrations to perform actions on marketing automation platforms, advertising systems, e-commerce, and other platforms.
  • Product analytics and digital experience analytics platforms such as Adobe Analytics, Amplitude, Contentsquare, FullStory, Glassbox, Heap, Mixpanel, and Userpilot aggregate usage metrics and provide analytics capabilities.
  • Media, e-commerce, and other content-rich websites should consider intelligent search platforms that include behavioral analytics, recommendation engines, and personalization capabilities.
  • Techniques to experiment and learn from user responses include A/B testing, user activity recordings, activity measurement tools, and customer feedback measurement practices. These aim to optimize activities based on customer segments and personas.
  • Application developers can use feature flagging to support large-scale A/B feature testing while implementing microservice observability to identify malicious API activities.
  • Organizations can also consider data analytics, analytics process automation, or machine learning platforms to centralize behavioral data and create behavioral analytics capabilities. Some data platforms include Alteryx, Dataiku, Databricks, DataRobot, Informatica, KNIME, RapidMiner, SAS, Tableau, Talend, and many others.
  • IoT, wearable, augmented reality/virtual reality, voice-enabled devices, and cameras with computer vision capabilities all represent new inputs and data sources for capturing behavioral data.

There’s little doubt that more organizations will consider using behavioral analytics capabilities to grow revenue, improve experiences, and reduce risks.

How AI is changing IoT

Artificial intelligence unlocks the true potential of IoT by enabling networks and devices to learn from past decisions, predict future activity, and continuously improve performance and decision-making capabilities.

Please Note: this article has been kindly reproduced from the site: infoworld.com 

Written by: Xavier Dupont

IoT has seen steady adoption across the business world over the past decade. Businesses have been built or optimized using IoT devices and their data capabilities, ushering in a new era of business and consumer technology. Now the next wave is upon us as advances in AI and machine learning unleash the possibilities of IoT devices utilizing “artificial intelligence of things,” or AIoT.

Consumers, businesses, economies, and industries that adopt and invest in AIoT can leverage its power and gain competitive advantages. IoT collects the data, and AI analyzes it to simulate smart behavior and support decision-making processes with minimal human intervention.

Why IoT needs AI

IoT allows devices to communicate with each other and act on the resulting insights. But these devices are only as good as the data they provide. To be useful for decision-making, the data needs to be collected, stored, processed, and analyzed.

This creates a challenge for organizations. As IoT adoption increases, businesses are struggling to process the data efficiently and use it for real-world decision making and insights.

This is due to two problems: the cloud and data transport. The cloud can’t scale proportionately to handle all the data that comes from IoT devices, and transporting data from the IoT devices to the cloud is bandwidth-limited. No matter the size and sophistication of the communications network, the sheer volume of data collected by IoT devices leads to latency and congestion.

Several IoT applications, such as autonomous cars, rely on rapid, real-time decision-making. To be effective and safe, autonomous cars need to process data and make instantaneous decisions (just like a human being). They can’t be limited by latency, unreliable connectivity, and low bandwidth.

Autonomous cars are far from the only IoT applications that rely on this rapid decision making. Manufacturing already incorporates IoT devices, and delays or latency could impact the processes or limit capabilities in the event of an emergency.

In security, biometrics are often used to restrict or allow access to specific areas. Without rapid data processing, there could be delays that impact speed and performance, not to mention the risks in emergency situations. These applications require ultra-low latency and high security. Hence the processing must be done at the edge. Transferring data to the cloud and back simply isn’t viable.

Benefits of AIoT

Every day, IoT devices generate around one billion gigabytes of data. By 2025, IoT-connected devices globally are projected to number 42 billion. As the networks grow, the data does too.

As demands and expectations change, IoT alone is not enough. Data volumes keep increasing, creating more challenges than opportunities. These obstacles limit the insights and possibilities of all that data, but intelligent devices can change that and allow organizations to unlock the true potential of their data.

With AI, IoT networks and devices can learn from past decisions, predict future activity, and continuously improve performance and decision-making capabilities. AI allows the devices to “think for themselves,” interpreting data and making real-time decisions without the delays and congestion that occur from data transfers.

AIoT has a wide range of benefits for organizations and offers a powerful path to intelligent automation.

Avoiding downtime

Some industries are hampered by downtime, such as the offshore oil and gas industry. Unexpected equipment breakdown can cost a fortune in downtime. To prevent that, AIoT can predict equipment failures in advance and schedule maintenance before the equipment experiences severe issues.
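
As a toy, hedged sketch of that idea (the sensor values are invented, and a production system would use far richer data), anomaly detection over equipment telemetry might look like this:

```python
# Toy predictive-maintenance sketch: flag sensor readings that deviate from
# normal operation, using scikit-learn's IsolationForest (all values invented).
import numpy as np
from sklearn.ensemble import IsolationForest

# Simulate normal operation: vibration (mm/s) and temperature (degrees C).
rng = np.random.default_rng(0)
normal = rng.normal(loc=[2.0, 60.0], scale=[0.2, 2.0], size=(500, 2))
detector = IsolationForest(random_state=0).fit(normal)

reading = np.array([[4.5, 78.0]])  # unusually high vibration and temperature
print(detector.predict(reading))   # [-1] means anomalous: schedule maintenance early
```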

Increasing operational efficiency

AI processes the huge volumes of data coming in from IoT devices and detects underlying patterns much more efficiently than humans can. AI with machine learning can enhance this capability by predicting the operational conditions and modifications necessary for improved outcomes.

Enabling new and improved products and services

Natural language processing is constantly improving, allowing devices and humans to communicate more effectively. AIoT can enhance new or existing products and services by allowing for better data processing and analytics.

Improved risk management

Risk management is necessary to adapt to a rapidly changing market landscape. AI with IoT can use data to predict risks and prioritize the ideal response, improving employee safety, mitigating cyber threats, and minimizing financial losses.

Key industrial applications for AIoT

AIoT is already revolutionizing many industries, including manufacturing, automotive, and retail. Here are some common applications for AIoT in different industries.

Manufacturing

Manufacturers have been leveraging IoT for equipment monitoring. Taking it a step further, AIoT combines the data insights from IoT devices with AI capabilities to offer predictive analysis. With AIoT, manufacturers can take a proactive role with warehouse inventory, maintenance, and production.

Robotics in manufacturing can significantly improve operations. Robots equipped with embedded sensors for data transmission and with AI can continually learn from data, saving time and reducing costs in the manufacturing process.

Sales and marketing

Retail analytics takes data points from cameras and sensors to track customer movements and predict their behaviors in a physical store, such as the time it takes to reach the checkout line. This can be used to suggest staffing levels and make cashiers more productive, improving overall customer satisfaction.

Major retailers can use AIoT solutions to grow sales through customer insights. Data such as mobile-based user behavior and proximity detection offer valuable insights to deliver personalized marketing campaigns to customers while they shop, increasing traffic in brick-and-mortar locations.

Automotive

AIoT has numerous applications in the automotive industry, including maintenance and recalls. AIoT can predict failing or defective parts, and can combine the data from recalls, warranties, and safety agencies to see which parts may need to be replaced and provide service checks to customers. Vehicles end up with a better reputation for reliability, and the manufacturer gains customer trust and loyalty.

One of the best-known, and possibly most exciting, applications for AIoT is autonomous vehicles. With AI bringing intelligence to IoT, autonomous vehicles can predict driver and pedestrian behavior in a multitude of circumstances to make driving safer and more efficient.

Healthcare

One of the prevailing goals of quality healthcare is extending it to all communities. Regardless of the size and sophistication of healthcare systems, physicians are under increasing time and workload pressures and spending less time with patients. The challenge to deliver high-quality healthcare against administrative burdens is intense. 

Healthcare facilities also produce vast amounts of data and record high volumes of patient information, including imaging and test results. This information is valuable and necessary to quality patient care, but only if healthcare facilities can access it quickly to inform diagnostic and treatment decisions.

IoT combined with AI has numerous benefits for these hurdles, including improving diagnostic accuracy, enabling telemedicine and remote patient care, and reducing the administrative burden of tracking patient health in the facility. And perhaps most importantly, AIoT can identify critical patients faster than humans by processing patient information, ensuring that patients are triaged effectively.

Prepare for the future with AIoT

AI and IoT are a perfect marriage of capabilities. AI enhances IoT through smart decision making, and IoT facilitates AI capability through data exchange. Ultimately, the two combined will pave the way to a new era of solutions and experiences that transform businesses across numerous industries, creating new opportunities altogether.

What is TensorFlow? The machine learning library explained

TensorFlow is a Python-friendly open source library for numerical computation that makes machine learning and developing neural networks faster and easier.

Please Note: this article has been kindly reproduced from the site: infoworld.com 

Machine learning is a complex discipline, but implementing machine learning models is far less daunting than it used to be, thanks to machine learning frameworks—such as Google’s TensorFlow—that ease the process of acquiring data, training models, serving predictions, and refining future results.

Created by the Google Brain team and initially released to the public in 2015, TensorFlow is an open source library for numerical computation and large-scale machine learning. TensorFlow bundles together a slew of machine learning and deep learning models and algorithms (aka neural networks) and makes them useful by way of common programmatic metaphors. It uses Python or JavaScript to provide a convenient front-end API for building applications, while executing those applications in high-performance C++.

TensorFlow, which competes with frameworks such as PyTorch and Apache MXNet, can train and run deep neural networks for handwritten digit classification, image recognition, word embeddings, recurrent neural networks, sequence-to-sequence models for machine translation, natural language processing, and PDE (partial differential equation)-based simulations. Best of all, TensorFlow supports production prediction at scale, with the same models used for training.

TensorFlow also has a broad library of pre-trained models that can be used in your own projects. You can also use code from the TensorFlow Model Garden as examples of best practices for training your own models.

How TensorFlow works

TensorFlow allows developers to create dataflow graphs—structures that describe how data moves through a graph, or a series of processing nodes. Each node in the graph represents a mathematical operation, and each connection or edge between nodes is a multidimensional data array, or tensor.
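
As a minimal sketch (assuming a TensorFlow 2.x installation), tracing a Python function with tf.function builds exactly such a graph: the matmul and add operations become nodes, and the tensors flow along the edges between them.

```python
import tensorflow as tf

@tf.function                      # traces the Python function into a dataflow graph
def affine(x, w, b):
    return tf.matmul(x, w) + b    # matmul and add are graph nodes; tensors are the edges

x = tf.constant([[1.0, 2.0]])     # 1x2 tensor
w = tf.constant([[3.0], [4.0]])   # 2x1 tensor
b = tf.constant([[0.5]])
print(affine(x, w, b))            # tf.Tensor([[11.5]], shape=(1, 1), dtype=float32)
```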

TensorFlow applications can be run on most any target that’s convenient: a local machine, a cluster in the cloud, iOS and Android devices, CPUs or GPUs. If you use Google’s own cloud, you can run TensorFlow on Google’s custom TensorFlow Processing Unit (TPU) silicon for further acceleration. The resulting models created by TensorFlow, though, can be deployed on most any device where they will be used to serve predictions.

TensorFlow 2.0, released in October 2019, revamped the framework in many ways based on user feedback, to make it easier to work with (as an example, by using the relatively simple Keras API for model training) and more performant. Distributed training is easier to run thanks to a new API, and support for TensorFlow Lite makes it possible to deploy models on a greater variety of platforms. However, code written for earlier versions of TensorFlow must be rewritten—sometimes only slightly, sometimes significantly—to take maximum advantage of new TensorFlow 2.0 features.

A trained model can be used to deliver predictions as a service via a Docker container using REST or gRPC APIs. For more advanced serving scenarios, you can use Kubernetes.
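
For instance, a model behind TensorFlow Serving’s REST API is typically queried as below; the host, port, and model name “my_model” are assumptions for illustration.

```python
import requests

# Hypothetical request to a TensorFlow Serving container's REST endpoint.
payload = {"instances": [[1.0, 2.0, 3.0]]}
resp = requests.post(
    "http://localhost:8501/v1/models/my_model:predict",  # assumed host and model name
    json=payload,
)
print(resp.json())  # e.g. {"predictions": [[0.87]]}
```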

Using TensorFlow with Python

TensorFlow provides all of this for the programmer by way of the Python language. Python is easy to learn and work with, and it provides convenient ways to express how high-level abstractions can be coupled together. TensorFlow is supported on Python versions 3.7 through 3.10, and while it may work on earlier versions of Python it’s not guaranteed to do so.

Nodes and tensors in TensorFlow are Python objects, and TensorFlow applications are themselves Python applications. The actual math operations, however, are not performed in Python. The libraries of transformations that are available through TensorFlow are written as high-performance C++ binaries. Python just directs traffic between the pieces and provides high-level programming abstractions to hook them together.

High-level work in TensorFlow—creating nodes and layers and linking them together—uses the Keras library. The Keras API is outwardly simple; a basic model with three layers can be defined in less than 10 lines of code, and the training code for the same takes just a few more lines of code. But if you want to “lift the hood” and do more fine-grained work, such as writing your own training loop, you can do that.
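
As a hedged sketch of how terse that is (the layer sizes, input shape, and loss are arbitrary choices for illustration, not prescribed by TensorFlow):

```python
import tensorflow as tf
from tensorflow import keras

# A basic three-layer model; sizes and shapes here are illustrative assumptions.
model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training is just as brief, given NumPy arrays x_train and y_train:
# model.fit(x_train, y_train, epochs=5, batch_size=32)
```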

Using TensorFlow with JavaScript

Python is the most popular language for working with TensorFlow and machine learning generally. But JavaScript is now also a first-class language for TensorFlow, and one of JavaScript’s massive advantages is that it runs anywhere there’s a web browser.

TensorFlow.js, as the JavaScript TensorFlow library is called, uses the WebGL API to accelerate computations by way of whatever GPUs are available in the system. It’s also possible to use a WebAssembly back end for execution, and it’s faster than the regular JavaScript back end if you’re only running on a CPU, though it’s best to use GPUs whenever possible. Pre-built models let you get up and running with simple projects to give you an idea of how things work.

TensorFlow Lite

Trained TensorFlow models can also be deployed on edge computing or mobile devices, such as iOS or Android systems. The TensorFlow Lite toolset optimizes TensorFlow models to run well on such devices, by allowing you to make tradeoffs between model size and accuracy. A smaller model (say, 12MB instead of 25MB, or even 100MB+) is less accurate, but the loss in accuracy is generally small, and more than offset by the model’s speed and energy efficiency.
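
A hedged sketch of that conversion step, assuming a model already exported in TensorFlow’s SavedModel format (the directory path is a placeholder):

```python
import tensorflow as tf

# Convert a SavedModel for on-device use; the path is hypothetical.
converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # the size/accuracy tradeoff knob
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```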

Why use TensorFlow

The single biggest benefit TensorFlow provides for machine learning development is abstraction. Instead of dealing with the nitty-gritty details of implementing algorithms, or figuring out proper ways to hitch the output of one function to the input of another, the developer can focus on the overall application logic. TensorFlow takes care of the details behind the scenes.

TensorFlow offers additional conveniences for developers who need to debug and gain introspection into TensorFlow apps. Each graph operation can be evaluated and modified separately and transparently, instead of constructing the entire graph as a single opaque object and evaluating it all at once. This so-called “eager execution mode,” provided as an option in older versions of TensorFlow, is now standard.

The TensorBoard visualization suite lets you inspect and profile the way graphs run by way of an interactive, web-based dashboard. A service, Tensorboard.dev (hosted by Google), lets you host and share machine learning experiments written in TensorFlow. It’s free to use with storage for up to 100M scalars, 1GB of tensor data, and 1GB of binary object data. (Note that any data hosted in Tensorboard.dev is public, so don’t use it for sensitive projects.)

TensorFlow also gains many advantages from the backing of an A-list commercial outfit in Google. Google has fueled the rapid pace of development behind the project and created many significant offerings that make TensorFlow easier to deploy and use. The above-mentioned TPU silicon for accelerated performance in Google’s cloud is just one example.

Deterministic model training with TensorFlow 

A few details of TensorFlow’s implementation make it hard to obtain totally deterministic model-training results for some training jobs. Sometimes, a model trained on one system will vary slightly from a model trained on another, even when they are fed the exact same data. The reasons for this variance are slippery—one reason is how random numbers are seeded and where; another is related to certain non-deterministic behaviors when using GPUs. TensorFlow’s 2.0 branch has an option to enable determinism across an entire workflow with a couple of lines of code. This feature comes at a performance cost, however, and should only be used when debugging a workflow.
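
In recent TensorFlow 2.x releases that option looks roughly like the following; exact availability varies by minor version, so treat this as a sketch rather than a guarantee.

```python
import tensorflow as tf

# Seed Python, NumPy, and TensorFlow random generators in one call.
tf.keras.utils.set_random_seed(42)

# Ask TensorFlow to use only deterministic op kernels; expect slower training.
tf.config.experimental.enable_op_determinism()
```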

TensorFlow vs. PyTorch, CNTK, and MXNet

TensorFlow competes with a slew of other machine learning frameworks. PyTorch, CNTK, and MXNet are three major frameworks that address many of the same needs. Let’s close with a quick look at where they stand out and come up short against TensorFlow:

  • PyTorch is built with Python and has many other similarities to TensorFlow: hardware-accelerated components under the hood, a highly interactive development model that allows for design-as-you-go work, and many useful components already included. PyTorch is generally a better choice for fast development of projects that need to be up and running in a short time, but TensorFlow wins out for larger projects and more complex workflows.
  • CNTK, the Microsoft Cognitive Toolkit, is like TensorFlow in using a graph structure to describe dataflow, but it focuses mostly on creating deep learning neural networks. CNTK handles many neural network jobs faster, and has a broader set of APIs (Python, C++, C#, Java). But it isn’t currently as easy to learn or deploy as TensorFlow. It’s also only available under the GNU GPL 3.0 license, whereas TensorFlow is available under the more liberal Apache license. And CNTK isn’t as aggressively developed; the last major release was in 2019.
  • Apache MXNet, adopted by Amazon as the premier deep learning framework on AWS, can scale almost linearly across multiple GPUs and multiple machines. MXNet also supports a broad range of language APIs—Python, C++, Scala, R, JavaScript, Julia, Perl, Go—although its native APIs aren’t as pleasant to work with as TensorFlow’s. It also has a far smaller community of users and developers.