Some cloud-based AI systems are returning to on-premises data centers

AI/ML model training and knowledge-based storage and processing are more costly on a cloud than many thought, and prices for compute and storage equipment have fallen.

Please Note: this article has been kindly reproduced from the site: infoworld.com 

Written by: David Linthicum

As a concept, artificial intelligence is very old. My first job out of college almost 40 years ago was as an AI systems developer using Lisp. Many of the concepts from back then are still in use today. However, it’s about a thousand times less expensive now to build, deploy, and operate AI systems for any number of business purposes.

Cloud computing revolutionized AI and machine learning, not because the hyperscalers invented it but because they made it affordable. Nevertheless, I and some others are seeing a shift in thinking about where to host AI/ML processing and AI/ML-coupled data. Using the public cloud providers was pretty much a no-brainer for the past few years. These days, the value of hosting AI/ML and the data it needs on public cloud providers is being called into question. Why?

Cost, of course. Many businesses have built game-changing AI/ML systems in the cloud, and when they get the cloud bills at the end of the month, they understand quickly that hosting AI/ML systems, including terabytes or petabytes of data, is pricey. Moreover, data egress and ingress costs (what you pay to move data out of your cloud provider to your data center or another cloud provider, and back in) will run up that bill significantly.

Companies are looking at other, more cost-effective options, including managed service providers and co-location providers (colos), or even moving those systems to the old server room down the hall. This last group is returning to “owned platforms” largely for two reasons.

First, the cost of traditional compute and storage equipment has fallen a great deal in the past five years or so. If you’ve never used anything but cloud-based systems, let me explain. We used to go into rooms called data centers where we could physically touch our computing equipment—equipment that we had to purchase outright before we could use it. I’m only half kidding.

When it comes down to renting versus buying, many are finding that traditional approaches, including the burden of maintaining your own hardware and software, are actually much cheaper than the ever-increasing cloud bills.

Second, many are experiencing some latency with the cloud. The slowdowns happen because most enterprises consume cloud-based systems over the open internet, and the multitenancy model means that you’re sharing processors and storage systems with many others at the same time. Occasional latency can translate into many thousands of dollars of lost revenue a year, depending on what you’re doing with your specific AI/ML system in the cloud.

Many of the AI/ML systems that are available from cloud providers are also available on traditional systems. Migrating from a cloud provider to a local server is cheaper and faster, and more akin to a lift-and-shift process, if you’re not locked into an AI/ML system that only runs on a single cloud provider.

What’s the bottom line here? Cloud computing will continue to grow. Traditional computing systems whose hardware we own and maintain, not as much. This trend won’t slow down. However, some systems, especially AI/ML systems that use a large amount of data and processing and happen to be latency sensitive, won’t be as cost-effective in the cloud. This could also be the case for some larger analytical applications such as data lakes and data lakehouses.

Some could save half the yearly cost of hosting on a public cloud provider by repatriating the AI/ML system on-premises. That business case is just too compelling to ignore, and many won’t.

Cloud computing prices may drop to accommodate these workloads that are cost-prohibitive to run on public cloud providers. Indeed, many workloads may not be built there in the first place, which is what I suspect is happening now. It is no longer always a no-brainer to leverage the cloud for AI/ML.

What is behavioral analytics and when is it important?

The ability to mine large amounts of data to study how users act offers long-reaching business benefits and risk reduction opportunities.

Please Note: this article has been kindly reproduced from the site: infoworld.com 

Written by:

You’re shopping for a car. You visit a manufacturer’s website to learn about model trims, review deals listed on the local dealer’s website, and then visit the dealership. What information can the sales rep review to learn about your purchasing needs and determine the best options to offer you?

The security operations center receives an alert about an employee’s activities on the network. Is the employee learning about different business areas and just working at unexpected hours from a remote location? Or is this malicious behavior and the SOC should take action?

These are examples of insights that user behavior analytics can provide. Common use cases include increasing business-to-business and business-to-consumer sales, improving customer experience, detecting anomalies, alerting on risks, and leveraging data from Internet of Things devices to identify dangerous conditions.

Rosaria Silipo, principal data scientist and head of evangelism at KNIME, offers this simple definition of behavioral analytics. She says, “Behavioral analytics studies people’s reactions and behavior patterns in particular situations.”

Business opportunities in behavioral analytics

Behavioral analytics is particularly important any time a product or service has many people doing numerous things, with opportunities both to improve outcomes and to reduce risks. Examples include customer buying habits on large-scale e-commerce websites, healthcare applications, gaming platforms, and wealth management in banking. Silipo explains further, “The goal is to study the mass of people, and the key is the availability of large amounts of data.”

Kathy Brunner, CEO of Acumen Analytics, cites research projecting that the global behavioral analytics market will grow from $427.3 million to $2.2 billion by 2026, a compound annual growth rate of 32% from 2022 to 2026.

Brunner shares these insights on the business opportunities. “The current focus is mainly retail, and why not? Where I see real transformation is in combining this capability with AI/ML, other advanced modeling technologies, and real-world evidence in healthcare data. Imagine the impact from figuring out how best to get patients into clinical trials, improving drug discovery, and advancing patient outcomes with precision and personalized medicine.”

So although behavioral analytics can be an issue if an implementation violates privacy norms or compliance regulations, it can also lead to very positive outcomes for consumers and businesses.

Mitigating risks with behavioral analytics

Behavioral analytics is often used for business opportunities, but the techniques are just as applicable to identifying and alerting on risks. Behavioral analytics is used in banking for fraud detection, embedded in AIops tools to improve incident management, and deployed in gaming systems to identify cheaters.

Large enterprises with many global employees, contractors, and suppliers also leverage behavioral analytics to spot suspicious activities. Isaac Kohen, vice president of research and development at Teramind, says, “User and entity behavior analytics can identify and alert the organization to a wide range of anomalous behaviors. Potential threats can be malicious, inadvertent, or compromised activities by an employee, user, or third-party entity. It is used in many industries to prevent insider threats and analyze user behaviors for compliance and regulatory requirements.”

The data science behind behavioral analytics is often applied to people, customers, and users, but it can also be applied to the entities operating in large-scale systems.

Todd Mostak, CTO and cofounder of Heavy.AI, shares this wider perspective: “Behavioral analytics is a data-driven approach to tracking, predicting, and leveraging behavior data to make informed decisions. With the shipping delays and supply chain shortages today, behavioral analytics technology can monitor the activity of billions of ships, examine ports, and study global shipping patterns to help experts solve these issues.”

The data science behind behavioral analytics

Behavior analytics is a broad application of data science, machine learning, and AI techniques. Scott Toborg, head of data science and analytics products at Teradata, explains the underlying data science. “Behavioral analytics leverages customer data about who they are (demographics), what they are doing (events), and who they interact with (connections) to derive better insights, predictions, and actions. The process consists of segmentation, predictive modeling, and prescriptive action.”
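
To make that three-step process concrete, here’s a minimal sketch in Python with scikit-learn. It’s purely illustrative: the customer features, the toy churn label, and the 0.5 risk threshold are hypothetical, not anything prescribed by Toborg or Teradata.

```python
# Purely illustrative sketch of segmentation -> prediction -> action.
# The features, toy churn label, and 0.5 threshold are all hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Per-customer features: [age, site visits, purchases]
X = rng.integers(low=[18, 0, 0], high=[80, 50, 20], size=(500, 3))
churned = (X[:, 1] < 10).astype(int)  # toy label: low engagement churns

# 1. Segmentation: group customers by who they are and what they do.
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# 2. Predictive modeling: estimate each customer's churn risk.
risk = LogisticRegression(max_iter=1000).fit(X, churned).predict_proba(X)[:, 1]

# 3. Prescriptive action: target the riskiest segments.
for seg in range(3):
    if risk[segments == seg].mean() > 0.5:
        print(f"Segment {seg}: send a retention offer")
```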

Toborg suggests that behavioral analytics shares many of the same opportunities data science targets but also faces challenges in developing and supporting machine learning models. He continues, “When properly implemented, behavioral analytics results in better customer experiences, more precise targeted marketing, and greater engagement. However, there are challenges, including privacy, model bias, and stereotyping.”

Useful techniques and technologies

Behavioral analytics is a set of operations, data, and technology practices targeted at specific business opportunities or at mitigating a set of quantifiable risks. There are many ways organizations can implement behavioral analytics. The list below is a subset of the available solutions.

  • Platforms such as content management, e-commerce, and digital experience often include capabilities to support behavioral analytics.
  • Customer data platforms centralize data on customers and their actions while providing integrations to perform actions on marketing automation platforms, advertising systems, e-commerce, and other platforms.
  • Product analytics and digital experience analytics platforms such as Adobe Analytics, Amplitude, Contentsquare, FullStory, Glassbox, Heap, Mixpanel, and Userpilot aggregate usage metrics and provide analytics capabilities.
  • Media, e-commerce, and other content-rich websites should consider intelligent search platforms that include behavioral analytics, recommendation engines, and personalization capabilities.
  • Techniques to experiment and learn from user responses include A/B testing, user activity recordings, activity measurement tools, and customer feedback measurement practices. These aim to optimize activities based on customer segments and personas.
  • Application developers can use feature flagging to support large-scale A/B feature testing while implementing microservice observability to identify malicious API activities.
  • Organizations can also consider data analytics, analytics process automation, or machine learning platforms to centralize behavioral data and create behavioral analytics capabilities. Some data platforms include Alteryx, Dataiku, Databricks, DataRobot, Informatica, KNIME, RapidMiner, SAS, Tableau, Talend, and many others.
  • IoT, wearable, augmented reality/virtual reality, voice-enabled devices, and cameras with computer vision capabilities all represent new inputs and data sources for capturing behavioral data.

There’s little doubt that more organizations will consider using behavioral analytics capabilities to grow revenue, improve experiences, and reduce risks.

How AI is changing IoT

Artificial intelligence unlocks the true potential of IoT by enabling networks and devices to learn from past decisions, predict future activity, and continuously improve performance and decision-making capabilities.

Please Note: this article has been kindly reproduced from the site: infoworld.com 

Written by: Xavier Dupont

IoT has seen steady adoption across the business world over the past decade. Businesses have been built or optimized using IoT devices and their data capabilities, ushering in a new era of business and consumer technology. Now the next wave is upon us as advances in AI and machine learning unleash the possibilities of IoT devices utilizing “artificial intelligence of things,” or AIoT.

Consumers, businesses, economies, and industries that adopt and invest in AIoT can leverage its power and gain competitive advantages. IoT collects the data, and AI analyzes it to simulate smart behavior and support decision-making processes with minimal human intervention.

Why IoT needs AI

IoT allows devices to communicate with each other and act on the data they exchange. These devices are only as good as the data they provide. To be useful for decision-making, the data needs to be collected, stored, processed, and analyzed.

This creates a challenge for organizations. As IoT adoption increases, businesses are struggling to process the data efficiently and use it for real-world decision making and insights.

This is due to two problems: the cloud and data transport. The cloud can’t scale proportionately to handle all the data that comes from IoT devices, and transporting data from the IoT devices to the cloud is bandwidth-limited. No matter the size and sophistication of the communications network, the sheer volume of data collected by IoT devices leads to latency and congestion.

Several IoT applications, such as autonomous cars, rely on rapid, real-time decision-making. To be effective and safe, autonomous cars need to process data and make instantaneous decisions (just like a human being). They can’t be limited by latency, unreliable connectivity, or low bandwidth.

Autonomous cars are far from the only IoT applications that rely on this rapid decision making. Manufacturing already incorporates IoT devices, and delays or latency could impact the processes or limit capabilities in the event of an emergency.

In security, biometrics are often used to restrict or allow access to specific areas. Without rapid data processing, there could be delays that impact speed and performance, not to mention the risks in emergencies. These applications require ultra-low latency and high security, so the processing must be done at the edge. Transferring data to the cloud and back simply isn’t viable.

Benefits of AIoT

Every day, IoT devices generate around one billion gigabytes of data. By 2025, an estimated 42 billion IoT devices will be connected globally. As the networks grow, so does the data.

As demands and expectations change, IoT alone is not enough. Data keeps increasing, creating more challenges than opportunities. These obstacles limit the insights and possibilities of all that data, but intelligent devices can change that and allow organizations to unlock the true potential of their data.

With AI, IoT networks and devices can learn from past decisions, predict future activity, and continuously improve performance and decision-making capabilities. AI allows the devices to “think for themselves,” interpreting data and making real-time decisions without the delays and congestion that occur from data transfers.

AIoT has a wide range of benefits for organizations and offers a powerful solution for intelligent automation.

Avoiding downtime

Some industries are hampered by downtime, such as the offshore oil and gas industry. Unexpected equipment breakdown can cost a fortune in downtime. To prevent that, AIoT can predict equipment failures in advance and schedule maintenance before the equipment experiences severe issues.
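
As a hedged illustration of the idea, here’s a small sketch in Python with scikit-learn. The sensor features, labels, and threshold are invented for the example; a real predictive maintenance system would be trained on historical failure records.

```python
# Hedged sketch: score equipment for failure risk from sensor readings.
# The features, toy labels, and 0.5 threshold are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Historical rows: [vibration, temperature, pressure] per machine.
history = rng.normal([0.5, 70.0, 30.0], [0.2, 5.0, 3.0], size=(1000, 3))
failed = (history[:, 0] > 0.8).astype(int)  # toy label: high vibration fails

model = RandomForestClassifier(n_estimators=100, random_state=1)
model.fit(history, failed)

# Score today's readings and schedule maintenance for risky machines.
latest = rng.normal([0.6, 72.0, 31.0], [0.25, 5.0, 3.0], size=(5, 3))
for unit, p in enumerate(model.predict_proba(latest)[:, 1]):
    if p > 0.5:
        print(f"Machine {unit}: failure risk {p:.0%}, schedule maintenance")
```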

Increasing operational efficiency

AI processes the huge volumes of data coming into IoT devices and detects underlying patterns much more efficiently than humans can. AI with machine learning can enhance this capability by predicting the operational conditions and modifications necessary for improved outcomes.

Enabling new and improved products and services

Natural language processing is constantly improving, allowing devices and humans to communicate more effectively. AIoT can enhance new or existing products and services by allowing for better data processing and analytics.

Improved risk management

Risk management is necessary to adapt to a rapidly changing market landscape. AI with IoT can use data to predict risks and prioritize the ideal response, improving employee safety, mitigating cyber threats, and minimizing financial losses.

Key industrial applications for AIoT

AIoT is already revolutionizing many industries, including manufacturing, automotive, and retail. Here are some common applications for AIoT in different industries.

Manufacturing

Manufacturers have been leveraging IoT for equipment monitoring. Taking it a step further, AIoT combines the data insights from IoT devices with AI capabilities to offer predictive analysis. With AIoT, manufacturers can take a proactive role with warehouse inventory, maintenance, and production.

Robotics in manufacturing can significantly improve operations. Robots equipped with embedded sensors for data transmission and AI can continually learn from that data, saving time and reducing costs in the manufacturing process.

Sales and marketing

Retail analytics takes data points from cameras and sensors to track customer movements and predict their behaviors in a physical store, such as the time it takes to reach the checkout line. This can be used to suggest staffing levels and make cashiers more productive, improving overall customer satisfaction.

Major retailers can use AIoT solutions to grow sales through customer insights. Data such as mobile-based user behavior and proximity detection offer valuable insights to deliver personalized marketing campaigns to customers while they shop, increasing traffic in brick-and-mortar locations.

Automotive

AIoT has numerous applications in the automotive industry, including maintenance and recalls. AIoT can predict failing or defective parts, and can combine the data from recalls, warranties, and safety agencies to see which parts may need to be replaced and provide service checks to customers. Vehicles end up with a better reputation for reliability, and the manufacturer gains customer trust and loyalty.

One of the best-known, and possibly most exciting, applications for AIoT is autonomous vehicles. With AI enabling intelligence to IoT, autonomous vehicles can predict driver and pedestrian behavior in a multitude of circumstances to make driving safer and more efficient.

Healthcare

One of the prevailing goals of quality healthcare is extending it to all communities. Regardless of the size and sophistication of healthcare systems, physicians are under increasing time and workload pressures and spending less time with patients. The challenge to deliver high-quality healthcare against administrative burdens is intense. 

Healthcare facilities also produce vast amounts of data and record high volumes of patient information, including imaging and test results. This information is valuable and necessary to quality patient care, but only if healthcare facilities can access it quickly to inform diagnostic and treatment decisions.

IoT combined with AI has numerous benefits for these hurdles, including improving diagnostic accuracy, enabling telemedicine and remote patient care, and reducing the administrative burden of tracking patient health in the facility. And perhaps most importantly, AIoT can identify critical patients faster than humans by processing patient information, ensuring that patients are triaged effectively.

Prepare for the future with AIoT

AI and IoT are a perfect marriage of capabilities. AI enhances IoT through smart decision-making, and IoT facilitates AI capabilities through data exchange. Ultimately, the two combined will pave the way to a new era of solutions and experiences that transform businesses across numerous industries, creating new opportunities altogether.

What is TensorFlow? The machine learning library explained

TensorFlow is a Python-friendly open source library for numerical computation that makes machine learning and developing neural networks faster and easier.

Please Note: this article has been kindly reproduced from the site: infoworld.com 

Written by:

Machine learning is a complex discipline but implementing machine learning models is far less daunting than it used to be, thanks to machine learning frameworks—such as Google’s TensorFlow—that ease the process of acquiring data, training models, serving predictions, and refining future results.

Created by the Google Brain team and initially released to the public in 2015, TensorFlow is an open source library for numerical computation and large-scale machine learning. TensorFlow bundles together a slew of machine learning and deep learning models and algorithms (aka neural networks) and makes them useful by way of common programmatic metaphors. It uses Python or JavaScript to provide a convenient front-end API for building applications, while executing those applications in high-performance C++.

TensorFlow, which competes with frameworks such as PyTorch and Apache MXNet, can train and run deep neural networks for handwritten digit classification, image recognition, word embeddings, recurrent neural networks, sequence-to-sequence models for machine translation, natural language processing, and PDE (partial differential equation)-based simulations. Best of all, TensorFlow supports production prediction at scale, with the same models used for training.

TensorFlow also has a broad library of pre-trained models that can be used in your own projects. You can also use code from the TensorFlow Model Garden as examples of best practices for training your own models.

How TensorFlow works

TensorFlow allows developers to create dataflow graphs—structures that describe how data moves through a graph, or a series of processing nodes. Each node in the graph represents a mathematical operation, and each connection or edge between nodes is a multidimensional data array, or tensor.
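
To make that concrete, here’s a minimal sketch in TensorFlow 2.x. Wrapping a Python function in tf.function traces it into a dataflow graph whose nodes are operations (here, MatMul and Add) and whose edges carry tensors; the numbers are arbitrary.

```python
import tensorflow as tf

# tf.function traces this Python function into a dataflow graph. Each
# operation (MatMul, Add) becomes a node, and the values flowing along
# the edges are tensors.
@tf.function
def model_step(x, w, b):
    return tf.matmul(x, w) + b

x = tf.constant([[1.0, 2.0]])    # a 1x2 tensor
w = tf.constant([[3.0], [4.0]])  # a 2x1 tensor
b = tf.constant([0.5])

print(model_step(x, w, b))  # tf.Tensor([[11.5]], shape=(1, 1), dtype=float32)
```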

TensorFlow applications can be run on most any target that’s convenient: a local machine, a cluster in the cloud, iOS and Android devices, CPUs or GPUs. If you use Google’s own cloud, you can run TensorFlow on Google’s custom Tensor Processing Unit (TPU) silicon for further acceleration. The resulting models created by TensorFlow, though, can be deployed on most any device where they will be used to serve predictions.

TensorFlow 2.0, released in October 2019, revamped the framework in many ways based on user feedback, to make it easier to work with (as an example, by using the relatively simple Keras API for model training) and more performant. Distributed training is easier to run thanks to a new API, and support for TensorFlow Lite makes it possible to deploy models on a greater variety of platforms. However, code written for earlier versions of TensorFlow must be rewritten—sometimes only slightly, sometimes significantly—to take maximum advantage of new TensorFlow 2.0 features.

A trained model can be used to deliver predictions as a service via a Docker container using REST or gRPC APIs. For more advanced serving scenarios, you can use Kubernetes.
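
As a rough sketch of what a client might look like, the snippet below posts a prediction request to a TensorFlow Serving container’s REST endpoint from Python. The host, port, model name (“my_model”), and input row are placeholders, not part of any particular deployment.

```python
import json
import requests

# Hypothetical client for a model already running in a TensorFlow
# Serving container. The host, port, model name ("my_model"), and the
# three-feature input row are placeholders for a real deployment.
payload = {"instances": [[1.0, 2.0, 3.0]]}
resp = requests.post(
    "http://localhost:8501/v1/models/my_model:predict",
    data=json.dumps(payload),
)
print(resp.json()["predictions"])
```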

Using TensorFlow with Python

TensorFlow provides all of this for the programmer by way of the Python language. Python is easy to learn and work with, and it provides convenient ways to express how high-level abstractions can be coupled together. TensorFlow is supported on Python versions 3.7 through 3.10, and while it may work on earlier versions of Python it’s not guaranteed to do so.

Nodes and tensors in TensorFlow are Python objects, and TensorFlow applications are themselves Python applications. The actual math operations, however, are not performed in Python. The libraries of transformations that are available through TensorFlow are written as high-performance C++ binaries. Python just directs traffic between the pieces and provides high-level programming abstractions to hook them together.

High-level work in TensorFlow—creating nodes and layers and linking them together—uses the Keras library. The Keras API is outwardly simple; a basic model with three layers can be defined in less than 10 lines of code, and the training code for the same takes just a few more lines of code. But if you want to “lift the hood” and do more fine-grained work, such as writing your own training loop, you can do that.
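
To illustrate that claim, here’s one plausible three-layer model in Keras; the input size, layer widths, and ten-class output are arbitrary choices for the sketch.

```python
import tensorflow as tf

# An illustrative three-layer model; the input size (20 features),
# layer widths, and ten-class output are arbitrary choices.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Training is only a few more lines once you have data:
# model.fit(x_train, y_train, epochs=5)
```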

Using TensorFlow with JavaScript

Python is the most popular language for working with TensorFlow and machine learning generally. But JavaScript is now also a first-class language for TensorFlow, and one of JavaScript’s massive advantages is that it runs anywhere there’s a web browser.

TensorFlow.js, as the JavaScript TensorFlow library is called, uses the WebGL API to accelerate computations by way of whatever GPUs are available in the system. It’s also possible to use a WebAssembly back end for execution, and it’s faster than the regular JavaScript back end if you’re only running on a CPU, though it’s best to use GPUs whenever possible. Pre-built models let you get up and running with simple projects to give you an idea of how things work.

TensorFlow Lite

Trained TensorFlow models can also be deployed on edge computing or mobile devices, such as iOS or Android systems. The TensorFlow Lite toolset optimizes TensorFlow models to run well on such devices, by allowing you to make tradeoffs between model size and accuracy. A smaller model (say, 12MB versus 25MB, or even 100+MB) is less accurate, but the loss in accuracy is generally small, and more than offset by the model’s speed and energy efficiency.
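
Here’s a minimal sketch of that conversion workflow using the TFLiteConverter API; the trivial one-layer model stands in for whatever trained model you actually want to ship.

```python
import tensorflow as tf

# Sketch of the conversion workflow: a trained Keras model (a trivial
# one-layer stand-in here) is converted to a TensorFlow Lite flatbuffer,
# with default optimizations trading a little accuracy for size.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```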

Why use TensorFlow

The single biggest benefit TensorFlow provides for machine learning development is abstraction. Instead of dealing with the nitty-gritty details of implementing algorithms, or figuring out proper ways to hitch the output of one function to the input of another, the developer can focus on the overall application logic. TensorFlow takes care of the details behind the scenes.

TensorFlow offers additional conveniences for developers who need to debug and gain introspection into TensorFlow apps. Each graph operation can be evaluated and modified separately and transparently, instead of constructing the entire graph as a single opaque object and evaluating it all at once. This so-called “eager execution mode,” provided as an option in older versions of TensorFlow, is now standard.
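
A quick illustration: with eager execution, each operation returns a concrete value you can print or convert to NumPy on the spot, with no separate graph-build-then-run step.

```python
import tensorflow as tf

# Eager execution: each operation runs immediately and returns a value
# you can inspect on the spot, rather than adding to an opaque graph.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = a * 2.0
print(b.numpy())         # [[2. 4.] [6. 8.]]
print(tf.reduce_sum(b))  # tf.Tensor(20.0, shape=(), dtype=float32)
```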

The TensorBoard visualization suite lets you inspect and profile the way graphs run by way of an interactive, web-based dashboard. A service, Tensorboard.dev (hosted by Google), lets you host and share machine learning experiments written in TensorFlow. It’s free to use with storage for up to 100M scalars, 1GB of tensor data, and 1GB of binary object data. (Note that any data hosted in Tensorboard.dev is public, so don’t use it for sensitive projects.)
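
Wiring TensorBoard into Keras training is a one-line callback, as in the sketch below; the log directory name is an arbitrary choice.

```python
import tensorflow as tf

# Logging to TensorBoard is a one-line Keras callback; "logs/run1" is
# an arbitrary directory. View the dashboard with: tensorboard --logdir logs
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir="logs/run1")

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer="adam", loss="mse")
# model.fit(x_train, y_train, epochs=5, callbacks=[tensorboard_cb])
```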

TensorFlow also gains many advantages from the backing of an A-list commercial outfit in Google. Google has fueled the rapid pace of development behind the project and created many significant offerings that make TensorFlow easier to deploy and use. The above-mentioned TPU silicon for accelerated performance in Google’s cloud is just one example.

Deterministic model training with TensorFlow 

A few details of TensorFlow’s implementation make it hard to obtain totally deterministic model-training results for some training jobs. Sometimes, a model trained on one system will vary slightly from a model trained on another, even when they are fed the exact same data. The reasons for this variance are slippery—one reason is how random numbers are seeded and where; another is related to certain non-deterministic behaviors when using GPUs. TensorFlow’s 2.0 branch has an option to enable determinism across an entire workflow with a couple of lines of code. This feature comes at a performance cost, however, and should only be used when debugging a workflow.
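
As a sketch, opting in looks roughly like this in recent TensorFlow 2.x releases (the op-determinism switch arrived around version 2.8); run it once at program start, before building any models.

```python
import tensorflow as tf

# Run once at program start, before building any models. Both calls
# exist in recent TensorFlow 2.x releases; expect a performance cost.
tf.keras.utils.set_random_seed(42)              # seeds Python, NumPy, and TF
tf.config.experimental.enable_op_determinism()  # disallow non-deterministic ops
```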

TensorFlow vs. PyTorch, CNTK, and MXNet

TensorFlow competes with a slew of other machine learning frameworks. PyTorch, CNTK, and MXNet are three major frameworks that address many of the same needs. Let’s close with a quick look at where they stand out and come up short against TensorFlow:

  • PyTorch is built with Python and has many other similarities to TensorFlow: hardware-accelerated components under the hood, a highly interactive development model that allows for design-as-you-go work, and many useful components already included. PyTorch is generally a better choice for fast development of projects that need to be up and running in a short time, but TensorFlow wins out for larger projects and more complex workflows.
  • CNTK, the Microsoft Cognitive Toolkit, is like TensorFlow in using a graph structure to describe dataflow, but it focuses mostly on creating deep learning neural networks. CNTK handles many neural network jobs faster, and has a broader set of APIs (Python, C++, C#, Java). But it isn’t currently as easy to learn or deploy as TensorFlow. It’s also only available under the GNU GPL 3.0 license, whereas TensorFlow is available under the more liberal Apache license. And CNTK isn’t as aggressively developed; the last major release was in 2019.
  • Apache MXNet, adopted by Amazon as the premier deep learning framework on AWS, can scale almost linearly across multiple GPUs and multiple machines. MXNet also supports a broad range of language APIs—Python, C++, Scala, R, JavaScript, Julia, Perl, Go—although its native APIs aren’t as pleasant to work with as TensorFlow’s. It also has a far smaller community of users and developers.

Why you should modernize search technologies

Important knowledge is scattered throughout the organization. Simplify everything, make it easy for employees to find what they need, and use machine learning.

Please Note: this article has been kindly reproduced from the site: infoworld.com 

Written by:

I earned much of my software development skills in architecting, building, and supporting customer-facing search applications. I used many different search technologies over the years, and they all had similar development patterns. You had to set up the infrastructure, load data, configure search indexes, and develop search experiences.

The work to load the data, configure search algorithms, and develop apps was just the beginning. Tuning relevancy was a tug of war between stakeholders with different views and requirements on the heuristics. Each new rule often required revisiting how content was tagged, enriched, or indexed. We had additional work to scale the infrastructure, add new data sources, and reconfigure search interfaces to support growth and new user personas.

Much has changed and improved since those first-generation search technologies, and today’s modernized search platforms make it easier to build the infrastructure, integrate with content sources, and improve relevancy. There’s also a strong business case to modernize search platforms to improve customer and employee support.

Yet I find many development and data science teams focus most of their data efforts on dataops, machine learning, and data visualization for structured data sources. Searching unstructured data, such as business documents, websites, XML repositories, or other textual data fields, often takes a back seat because of the added technology and skills needed to search it well.

For this post, I consulted with three experts on why IT, digital experience, and data teams should consider modernizing their search technologies.

Simplify experiences, dev tools, and system administration

Mark Floisand, senior vice president of product and marketing at Coveo, shares one of the problems with legacy search implementations that can be more easily solved today. “Enterprise search technology has typically been bought or built within departments, siloed and only with individual departmental goals in mind. Instead, you can deliver enterprise search, website search, and in-app search using a single, unified platform,” he says.

Centralizing on a single platform to provide a common user experience, developer tools, and administrative capabilities can impact several departments. Floisand continues, “Unifying search dramatically simplifies IT’s management and internal support burden. IT can support all internal departments’ requests with the right platform, whether teams are focused on customer acquisition, conversion, and retention or on helping other employees be more proficient.”

One way development teams can support multiple search experiences is with headless search, especially when the workflow and user experience require personalization. Developers can then use lighter-weight low-code and no-code interfaces to embed search into customer support and employee workflow platforms.

Improve employee experiences to support hybrid work models

The search capabilities bundled with enterprise portals can be sufficient for smaller companies, especially if they have less frequent communications and fewer tools to integrate. But for larger companies with multiple departments and many information sources, centralizing information from multiple content management systems, customer relationship management systems, and other software-as-a-service tools leads to an information-rich experience.

A comprehensive search experience should be a primary tool for employees to find documentation, subject matter experts, and information generated in workflow tools. This capability is critical for teams in a hybrid work model, and it’s one step in creating a virtual water cooler. It can help employee productivity and reduce the stress of finding the key information for their objectives.

Arvind Jain, CEO of Glean, agrees. “Finding what you need at work is complicated, especially as companies grow, as knowledge becomes fragmented across an array of apps and people.”

Of course, building a personalized, relevant, and up-to-date search experience wasn’t trivial before we had cloud, SaaS with APIs, integration platforms, and machine learning. Poor data quality creates a poor search experience that employees must work around.

Jain says, “Building a great enterprise search experience requires solving previously insurmountable challenges, like deeply understanding how employees work and what information matters to them. Advances in technology have helped unlock radically better solutions that allow advanced relevance models to be built without the need for constant manual tweaks.”

Expand search across more content sources

Eudald Camprubí, CEO of Nuclia, highlights search engine capabilities that can expand a company’s scope and scale. He says, “Between 80% and 90% of any company’s data is unstructured. Data lies in different data sources and is in different formats and languages. Ingesting, processing, and indexing this data is among the biggest challenges in search today. Only AI-powered search engines for unstructured data will help enterprises overcome this chaos.”

Search engines with built-in and configurable machine learning algorithms provide significant advantages for companies with multiple apps and user personas searching large information repositories. Search platforms compete on the quality and scale of their machine learning capabilities, including algorithms for entity enrichment, automatic relevance tuning, and recommendation engines.

Why prioritize search platforms?

Here are five more reasons organizations should modernize search platforms and experiences:

  • Modern platforms go beyond keyword interfaces and simplify the user experience with natural language querying.
  • Businesses supporting multiple search technologies should be able to find cost savings by consolidating to a single enterprise search platform.
  • Devops teams can reduce technical debt by consolidating to one platform, developing a service layer, and converting proprietary integrations to the search platform’s out-of-the-box ones.
  • Upgrading apps by modernizing search experiences has the potential to improve performance, support mobile interfaces, address accessibility, and personalize the experience.
  • Search engines with APIs can be a back-end repository to data science, analytics, and data visualization tools, effectively presenting unstructured data as a structured data source.

If your devops teams support legacy search indexes, it may be time to dust off the cobwebs and consider upgrading. Modernized platforms offer significant benefits to businesses, users, data science teams, and technology organizations.

Google’s former head says AI is as dangerous as nuclear weapons

Eric Schmidt said that he was ‘naive about the impact of what we were doing’ but that ‘arming’ AI could ‘trigger the other side’

Please Note: this article has been kindly reproduced from the site: independent.co.uk 

Written by: Adam Smith

Google’s former chief executive Eric Schmidt has called artificial intelligence as dangerous as nuclear weapons.

Speaking at the Aspen Security Forum earlier this week, Eric Schmidt said that he was “naive about the impact of what we were doing”, but that information is “incredibly powerful” and “government and other institutions should put more pressure on tech to make these things consistent with our values.”

“The leverage that tech has is very, very real. If you think about, how will we negotiate an AI agreement? First you have to have technologists that understand what’s going to happen, and then you have awareness on the other side.

“Let’s say we want to have a chat with China on some kind of treaty around AI surprises. Very reasonable. How would we do it? Who in the US government would work with us? And it’s even worse on the Chinese side? Who do we call? … we’re not ready for the negotiations we need.

“In the 50s and 60s, we eventually worked out a world where there was a ‘no surprise’ rule about nuclear tests, and eventually they were banned. It’s an example of a balance of trust, or lack of trust; it’s a ‘no surprises’ rule.

“I’m very concerned that the U.S. view of China as corrupt or Communist or whatever, and the Chinese view of America as failing…will allow people to say ‘Oh my god, they’re up to something,’ and then begin some kind of conundrum … because you’re arming or getting ready, you then trigger the other side.”

The capabilities of artificial intelligence have been stated – and overstated – numerous times over the years. Tesla chief executive Elon Musk has often said that AI is highly likely to be a threat to humans, and recently Google fired a software engineer who claimed its artificial intelligence had become self-aware and sentient.

However, experts have often reminded people that the issue with AI lies in what it is trained to do and how it is used by humans. If the data used to train these systems is flawed, racist, or sexist, then the results will reflect that.

Microsoft retires controversial AI that can guess your emotions

Tech giant warns that ‘new guardrails’ are required for artificial intelligence

Please Note: this article has been kindly reproduced from the site: independent.co.uk 

Written by: Anthony Cuthbertson

Microsoft has announced that it will halt sales of an artificial intelligence service that can predict a person’s age, gender and even emotions.

The tech giant cited ethical concerns surrounding the facial recognition technology, which it claimed could subject people to “stereotyping, discrimination, or unfair denial of services”.

In a blog post published on Tuesday, Microsoft outlined the measures it would take to ensure its Face API is developed and used responsibly.

“To mitigate these risks, we have opted to not support a general-purpose system in the Face API that purports to infer emotional states, gender, age, smile, facial hair, hair, and makeup,” wrote Sarah Bird, a product manager at Microsoft’s Azure AI.

“Detection of these attributes will no longer be available to new customers beginning 21 June, 2022, and existing customers have until 30 June, 2023, to discontinue use of these attributes before they are retired.”

Microsoft’s Face API was used by companies like Uber to verify that the driver using the app matches the account on file. However, unionised drivers in the UK called for it to be removed after it failed to recognise legitimate drivers.

The technology also raised fears about potential misuse in other settings, such as firms using it to monitor applicants during job interviews.

Despite retiring the product for customers, Microsoft will continue to use the controversial technology within at least one of its products. An app for people with visual impairments called Seeing AI will still make use of the machine vision capabilities.

Microsoft also announced that it would be making updates to its ‘Responsible AI Standard’ – an internal playbook that guides its development of AI products – in order to mitigate the “socio-technical risks” posed by the technology.

The process involved consultations with researchers, engineers, policy experts and anthropologists to help understand which safeguards can help prevent discrimination.

Big tech could be forced to reveal their AI algorithms

Please Note: this article has been kindly reproduced from artificialintelligence-news.com 

by: Ryan Daws – senior editor at TechForge Media

A landmark case in Japan could force tech giants to reveal how their algorithms work.

Last month, a Tokyo court ruled in favour of Hanryumura – a BBQ restaurant chain operator – in an antitrust case brought against Kakaku.com, operator of Tabelog, Japan’s largest restaurant review platform.

Hanryumura claimed that Kakaku altered the way user scores were tallied in a way that hurt sales at its restaurants. The restaurant operator received $284,000 in damages, but that’s not what’s most interesting about the case.

In an unprecedented move, the court asked Kakaku to reveal part of its algorithms. There has never been an antitrust case anywhere in the world where a digital platform has been forced to disclose its algorithm.

Hanryumura is banned from disclosing what it was shown about Kakaku’s algorithms. Tech firms have long argued that their algorithms should be classed as trade secrets and never be revealed.

Now that a precedent has been set, it’s likely that similar cases will follow.

The case also supports calls for regulators to force companies, especially big tech firms, to be more transparent about how their algorithms work—especially where they make critical decisions about people’s lives.


Why We Talk About Computers Having Brains (and Why the Metaphor Is All Wrong)

Please Note: this article has been kindly reproduced from theconversation.com 

by: Tomas Fitzgerald – Lecturer in Law, Curtin University

It is a truth, universally acknowledged, that the machines are taking over. What is less clear is whether the machines know that. Recent claims by a Google engineer that the LaMDA AI chatbot might be conscious made international headlines and sent philosophers into a tizz. Neuroscientists and linguists were less enthused.

As AI makes greater gains, debate about the technology moves from the hypothetical to the concrete and from the future to the present. This means a broader cross-section of people – not just philosophers, linguists and computer scientists but also policy-makers, politicians, judges, lawyers and law academics – need to form a more sophisticated view of AI.

After all, how policy-makers talk about AI is already shaping decisions about how to regulate that technology.

Take, for example, the case of Thaler v Commissioner of Patents, which was launched in the Federal Court of Australia after the commissioner for patents rejected an application naming an AI as an inventor. When Justice Beech disagreed and allowed the application, he made two findings.

First, he found that the word “inventor” simply described a function and could be performed either by a human or a thing. Think of the word “dishwasher”: it might describe a person, a kitchen appliance, or even an enthusiastic dog.

Second, Justice Beech used the metaphor of the brain to explain what AI is and how it works. Reasoning by analogy with human neurons, he found that the AI system in question could be considered autonomous, and so might meet the requirements of an inventor.

The case raises an important question: where did the idea that AI is like a brain come from? And why is it so popular?

AI for the mathematically challenged

It is understandable that people with no technical training might rely on metaphors to understand complex technology. But we would hope that policy-makers might develop a slightly more sophisticated understanding of AI than the one we get from Robocop.

My research considered how law academics talk about AI. One significant challenge for this group is that they are frequently maths-phobic. As the legal scholar Richard Posner argues, the law

provides a refuge for bright youngsters who have “math block”, though this usually means they shied away from math and science courses because they could get higher grades with less work in verbal fields.

Following Posner’s insight, I reviewed all uses of the term “neural network” – the usual label for a common kind of AI system – published in a set of Australian law journals between 2015 and 2021.

Most papers made some attempt to explain what a neural network was. But only three of the nearly 50 papers attempted to engage with the underlying mathematics beyond a broad reference to statistics. Only two papers used visual aids to assist in their explanation, and none at all made use of the computer code or mathematical formulas central to neural networks.

By contrast, two-thirds of the explanations referred to the “mind” or biological neurons. And the overwhelming majority of those made a direct analogy. That is, they suggested AI systems actually replicated the function of human minds or brains. The metaphor of the mind is clearly more attractive than engaging with the underlying maths.

It is little wonder, then, that our policy-makers and judges – like the general public – make such heavy use of these metaphors. But the metaphors are leading them astray.

Where did the idea that AI is like the brain come from?

Understanding what produces intelligence is an ancient philosophical problem that was ultimately taken up by the science of psychology. An influential statement of the problem was made in William James’ 1890 book Principles of Psychology, which set early scientific psychologists the task of identifying a one-to-one correlation between a mental state and a physiological state in the brain.

Working in the 1920s, neurophysiologist Warren McCulloch attempted to solve this “mind/body problem” by proposing a “psychological theory of mental atoms”. In the 1940s he joined Nicholas Rashevsky’s influential biophysics group, which was attempting to bring the mathematical techniques used in physics to bear on the problems of neuroscience.

Key to these efforts were attempts to build simplified models of how biological neurons might work, which could then be refined into more sophisticated, mathematically rigorous explanations.

If you have vague recollections of your high school physics teacher trying to explain the motion of particles by analogy with billiard balls or long metal slinkies, you get the general picture. Start with some very simple assumptions, understand the basic relations and work out the complexities later. In other words, assume a spherical cow.

In 1943, McCulloch and logician Walter Pitts proposed a simple model of neurons meant to explain the “heat illusion” phenomenon. While it was ultimately an unsuccessful picture of how neurons work – McCulloch and Pitts later abandoned it – it was a very helpful tool for designing logic circuits. Early computer scientists adapted their work into what is now known as logic design, where the naming conventions – “neural networks” for example – have persisted to this day.
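
To see why such a simple model mapped so neatly onto logic circuits, consider this tiny illustrative implementation of a McCulloch-Pitts unit: a weighted sum of binary inputs compared against a threshold. Changing the threshold turns the same unit into different logic gates.

```python
# An illustrative McCulloch-Pitts unit: binary inputs, fixed weights,
# and a hard threshold. Choosing weights and thresholds turns the same
# unit into different logic gates, which is why the model lived on in
# logic design.
def mp_neuron(inputs, weights, threshold):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} {b} -> AND: {AND(a, b)}  OR: {OR(a, b)}")
```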

That computer scientists still use terms like these seems to have fuelled the popular misconception that there is an intrinsic link between certain kinds of computer programs and the human brain. It is as though the simplified assumption of a spherical cow turned out to be a useful way to describe how ball pits should be designed and left us all believing there is some necessary link between children’s play equipment and dairy farming.

This would be not much more than a curiosity of intellectual history were it not the case that these misconceptions are shaping our policy responses to AI.

Is the solution to force lawyers, judges and policy-makers to pass high school calculus before they start talking about AI? Certainly they would object to any such proposal. But in the absence of better mathematical literacy we need to use better analogies.

While the Full Federal Court has since overturned Justice Beech’s decision in Thaler, it specifically noted the need for policy development in this area. Without giving non-specialists better ways of understanding and talking about AI, we’re likely to continue to have the same challenges.