
Meta AI Is Getting a New Memory Feature and Personalised Recommendations

Meta AI is getting a couple of new upgrades, the company announced on Monday. Meta wants its artificial intelligence (AI) chatbot to offer a more personalised experience to users, and the two new features will enable it to learn more about users. The first is a memory feature that will allow the chatbot to remember certain information shared by the user in individual chats, and the second is a personalised recommendation feature that will allow Meta AI to look through the user’s social media profiles and in-app activities to suggest relevant information.

In a newsroom post, the social media giant announced two new ways it is making Meta AI more personalised for users. The company said it has been experimenting with a new memory feature that allows the chatbot to remember certain information about the user.

Memory in Meta AI can only be saved in individual chats. Users can either specifically tell the AI to remember particular details, or it can remember certain information automatically during conversations. For instance, if a user asks the chatbot to suggest breakfast ideas and it suggests an omelette, the user can tell Meta AI that they are vegetarian and it will remember this. In future conversations, the AI will then only suggest vegetarian meal ideas.


Memory in Meta AI
Photo Credit: Meta AI

Meta did not share what kind of information the AI can save, or whether that extends to sensitive information such as financial and medical details. However, users will be notified whenever the chatbot saves a new piece of information, and they will be able to delete saved memories manually.

Memory in Meta AI is rolling out to Facebook, Messenger, and WhatsApp for iOS and Android in the US and Canada.

The second feature allows the chatbot to collect information about the user to generate personalised recommendations. The social media giant highlighted that the information will be taken from user profiles on Facebook and Instagram as well as in-app activities such as watching Reels, liking and commenting on posts, and more.

Explaining how the feature would work, the post stated that if a user asks for recommendations for a fun weekend activity with the family, Meta AI can find the user’s home location from Facebook, go through recently viewed Reels to identify activities the user might be interested in, and combine this with information from the memory feature to recommend, say, a music concert. This feature will be available on Facebook, Messenger, and Instagram in the US and Canada.

Notably, the company did not address if users will have a choice in deciding whether to share this information with Meta AI.


OpenAI Faces New Copyright Case, From Global Publishers in India

Indian book publishers and their international counterparts have filed a copyright lawsuit against OpenAI in New Delhi, a representative said on Friday, the latest in a series of global cases seeking to stop the ChatGPT chatbot from accessing proprietary content.

Courts across the world are hearing claims by authors, news outlets and musicians who accuse technology firms of using their copyrighted work to train AI services, and who are seeking to have the content used to train the chatbot deleted.

The New Delhi-based Federation of Indian Publishers told Reuters it had filed a case at the Delhi High Court, which is already hearing a similar lawsuit against OpenAI.

The case was filed on behalf of all the federation’s members, who include publishers like Bloomsbury, Penguin Random House, Cambridge University Press and Pan Macmillan, as well as India’s Rupa Publications and S.Chand and Co, it said.

“Our ask from the court is that they should stop (OpenAI from) accessing our copyright content,” Pranav Gupta, the federation’s general secretary said in an interview about the lawsuit, which concerns the ChatGPT tool’s book summaries.

“In case they don’t want to do licensing with us, they should delete datasets used in AI training and explain how we will be compensated. This impacts creativity,” he added.

OpenAI did not respond to a request for comment on the allegations and the lawsuit, which was filed in December but is being reported here for the first time. It has repeatedly denied such allegations, saying its AI systems make fair use of publicly available data.

OpenAI kicked off an investment, consumer and corporate frenzy in generative AI after the Nov. 2022 launch of ChatGPT. It aims to stay ahead in the AI race after raising $6.6 billion last year.

The Indian book publishers’ group is seeking to join Indian news agency ANI’s lawsuit against the Microsoft-backed OpenAI, which is the most high-profile legal proceeding in the nation on this subject.

“These cases represent a pivotal moment and can potentially shape the future legal framework on AI in India. The judgment passed here will test the balance between protecting IP and promoting tech advancement,” said Siddharth Chandrashekhar, a Mumbai based lawyer.

Responding to the ANI case, OpenAI said in comments reported by Reuters this week that any order to delete training data would put it in violation of its U.S. legal obligations, and that Indian courts lack jurisdiction to hear a copyright case against the company as its servers are located abroad.

The federation said OpenAI offers services in India so its activities should fall under Indian laws.

Reuters, which holds a 26% interest in ANI, has said in a statement it is not involved in its business practices or operations.

OpenAI made its first India hire last year when it tapped former WhatsApp executive, Pragya Misra, to handle public policy and partnerships in the country of 1.4 billion people, where millions of new users are going online, thanks to cheap mobile data prices.

Worries Over Book Summaries

A Reuters reporter asked ChatGPT on Friday for details of the first volume of the Harry Potter series by J. K. Rowling, published by Bloomsbury. The AI tool responded with a chapter-by-chapter summary and a summary of key events, including the story’s climax.

It stopped short of giving the actual text, however, saying, “I cannot provide the entire text of the book, as it is copyrighted material.”

Penguin Random House said in November that it had started a global initiative to include a statement on the copyright page of its titles saying “no part of this book may be used or reproduced in any manner for the purpose of training” AI technologies.

The Indian federation’s December filing, which was seen by Reuters, argues it has obtained “credible evidence/information” from its members that OpenAI used their literary works to train its ChatGPT service.

“This free tool produces book summaries, extracts, why would people buy books then?” Gupta said, referring to AI chatbots using extracts from unlicensed online copies. “This will impact our sales, all members are concerned about this.”

The federation’s plea has so far only been listed before a court registrar in New Delhi, who on Jan. 10 asked OpenAI to respond in the matter. A judge will now hear the case on Jan. 28.

© Thomson Reuters 2024


Google Titans AI Architecture Unveiled With Ability to Solve Long-Term Memory Issues in AI Models

Google researchers unveiled a new artificial intelligence (AI) architecture last week that can enable large language models (LLMs) to remember the long-term context of events and topics. A paper was published by the Mountain View-based tech giant on the topic, and the researchers claim that AI models trained using this architecture displayed a more “human-like” memory retention capability. Notably, Google ditched the traditional Transformer and Recurrent Neural Network (RNN) architectures to develop a new method to teach AI models how to remember contextual information.

Titans Can Scale AI Models’ Context Window Beyond 2 Million Tokens

The lead researcher of the project, Ali Behrouz, posted about the new architecture on X (formerly known as Twitter). He claimed that the new architecture provides a meta in-context memory with attention that teaches AI models how to remember information at test time.

According to Google’s paper, published on the preprint server arXiv, the Titans architecture can scale the context window of AI models beyond two million tokens. Memory has been a tricky problem for AI developers to solve.

Humans remember information and events with context. If someone asked a person what they wore last weekend, they would be able to recall additional contextual information, such as attending the birthday party of a person they have known for the last 12 years. This way, when asked a follow-up question about why they wore a brown jacket and denim jeans last weekend, the person would be able to contextualise the answer with all of this short-term and long-term information.

AI models, on the other hand, typically rely on retrieval-augmented generation (RAG) systems layered on top of Transformer and RNN architectures. Information is stored in an external index; when an AI model is asked a question, it retrieves the entries containing the main information, along with nearby entries that might contain additional or related information, and feeds them into the prompt as context. However, once a query is answered, the retrieved information is discarded to save processing power.

However, there are two downsides to this. First, an AI model cannot remember information in the long run. If one wanted to ask a follow-up question after a session was over, one would have to provide the full context again (unlike how humans function). Second, AI models do a poor job of retrieving information involving long-term context.
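The retrieval step at the heart of a RAG pipeline can be sketched in a few lines. This is a minimal illustration, not any specific product's implementation; the document texts and tiny three-dimensional "embeddings" below are invented, whereas a real system would use a learned embedding model over a large store.

```python
import numpy as np

# Toy document store: each entry is (text, embedding).
# The embeddings are hand-made 3-d vectors purely for illustration.
docs = [
    ("User is vegetarian", np.array([1.0, 0.0, 0.0])),
    ("User lives in Delhi", np.array([0.0, 1.0, 0.0])),
    ("User likes jazz",     np.array([0.0, 0.0, 1.0])),
]

def retrieve(query_vec, k=1):
    """Return the k document texts whose embeddings best match the query."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# A food-related query surfaces the dietary fact; the retrieved text is
# prepended to the prompt and discarded once the session ends.
print(retrieve(np.array([0.9, 0.1, 0.0])))  # → ['User is vegetarian']
```

The second downside above follows directly from this design: nothing retrieved here survives the session, so the next conversation starts from scratch.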

With Titans, Behrouz and other Google researchers sought to build an architecture that enables AI models to develop a long-term memory which can run continually, while forgetting information so that the process remains computationally efficient.

To this end, the researchers designed an architecture that encodes history into the parameters of a neural network. Three variants were used — Memory as Context (MAC), Memory as Gating (MAG), and Memory as a Layer (MAL). Each of these variants is suited for particular tasks.

Additionally, Titans uses a new surprise-based learning system, which tells AI models to remember unexpected or key information about a topic. These two changes allow the Titans architecture to showcase improved memory function in LLMs.
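The surprise-based update can be pictured roughly as follows: the momentary surprise is the gradient of a recall loss for the incoming key-value pair, blended with past surprise via a momentum term, while a decay term lets the memory forget. The numpy sketch below is a toy interpretation under invented hyperparameters and an invented association, not the published implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 4
M = np.zeros((d, d))   # long-term memory parameters (toy linear map)
S = np.zeros((d, d))   # accumulated "past surprise" (momentum term)

eta, theta, alpha = 0.6, 0.1, 0.005  # momentum, step size, forget rate (invented)

def memory_step(M, S, k, v):
    """One test-time update: momentary surprise is the gradient of the
    recall loss ||M @ k - v||^2 for the incoming key/value pair."""
    grad = np.outer(M @ k - v, k)   # momentary surprise
    S = eta * S - theta * grad      # blend past surprise with momentary surprise
    M = (1 - alpha) * M + S         # decay (forget) old memory, write new
    return M, S

# Toy association to memorise: each value is its key reversed.
for _ in range(300):
    k = rng.normal(size=d)
    M, S = memory_step(M, S, k, k[::-1])

# Recall error on a fresh key shrinks well below the untrained error.
k = rng.normal(size=d)
print(np.linalg.norm(M @ k - k[::-1]))
```

The decay factor is what keeps the memory computationally bounded: old associations fade unless the surprise signal keeps rewriting them.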

In a separate post, Behrouz claimed that based on internal testing on the BABILong benchmark (a needle-in-a-haystack approach), Titans (MAC) models were able to outperform large AI models such as GPT-4, Llama 3 + RAG, and Llama 3 70B.


Google Reportedly Working On a Content Filter Feature for Gemini

Google is reportedly working on a new artificial intelligence (AI) feature for its in-house chatbot Gemini. As per the report, the feature was spotted in the latest beta version of the Google app for Android and is called Content filters. As the name suggests, it is believed that the feature will allow users granular control over unwanted or harmful content generated by the AI chatbot. However, since the feature is said to not be public-facing or active, it is unclear how it would function.

Gemini May Get a Content Filter Feature

According to an Android Authority report, the Mountain View-based tech giant is working on a content moderation tool for Gemini. The publication spotted evidence of the feature in the Google app for Android beta version 15.51.24.sa.arm64. Notably, the feature is not public-facing, so beta testers will not be able to try it out just yet.

The publication also shared a screenshot of the feature. Based on the screenshot, the new feature is available within the Gemini Settings page between the options of Screen context and Manage your Advanced subscription. The new feature is labelled as Content filters.

Underneath the name of the feature, the screenshot also shows a brief description that says, “Use filters to control the responses you see”. Not much else is known about the feature, as it is not activated on the server side. Tapping the Gemini feature reportedly redirects users to a URL on Google’s Gemini website. However, this page is currently not active, and the publication was unable to find any further information.

However, based on this information, the feature is likely a tool that lets users further control the kind of responses they would like to see. It could offer filters in the same way parental controls are available on devices and websites, allowing users to see only safe content.

Alternatively, the feature could be expansive and allow users to blacklist websites, ban entire topics, and ground the responses against set verifiers. A less likely possibility is also that this setting allows users to tailor the responses of Gemini by writing style and tonality for all future conversations. However, these are just speculations, and nothing can be said conclusively until Google makes an announcement about the feature.


OpenAI’s ChatGPT and Sora Services Now Fully Operational After Suffering a Major Outage

OpenAI’s artificial intelligence (AI) chatbot ChatGPT suffered a major service outage on Thursday in the US and some other regions. As per reports registered on an online outage monitor, the outage began at roughly 1:30pm ET on December 26 (12:00am IST, December 27). The outage also affected the AI firm’s API service, as well as the text-to-video platform Sora. Notably, the issue persisted for nearly five hours before the company confirmed that the platforms were fully operational again.

OpenAI Suffers a Major Outage

According to the online outage monitoring platform Down Detector, a spike for ChatGPT was first spotted at 1:30pm ET, with about 50,000 users reporting an inability to access the AI chatbot. At 2:00pm ET (12:30am IST), OpenAI posted the first official update on its status page and said, “We are currently experiencing an issue with high error rates on ChatGPT, the API, and Sora.”

Soon after, the AI firm said that the issue was identified to be caused by an “upstream provider”, but did not specify it. Around the same time, Microsoft reported a power issue in one of its data centres in a post on X (formerly known as Twitter), highlighting that it affected the access and functionality of various Microsoft 365 services, Azure, and Xbox cloud gaming.

“We determined that an unexpected power incident in a portion of South Central US, AZ03, has affected multiple services,” the tech giant highlighted in a status page update. Microsoft’s services were back up by 5:00pm ET (3:30am IST). Just over an hour later, at 6:15pm ET (4:45am IST), OpenAI shared an update saying Sora was fully operational. It could not be confirmed whether the two outages were related.

OpenAI’s last update came at 9:04pm ET (7:34am IST), in which it highlighted that ChatGPT had mostly recovered. At the time of writing, Gadgets 360 staff members were able to access and interact with ChatGPT on both the web client and the mobile app. Reports on Down Detector have also fallen to single digits. The AI firm has said it will run a full root-cause analysis of this outage.


Microsoft Releases AIOpsLab, an Open-Source Standardised AI Framework for AIOps Agents

Microsoft researchers have released an open-source artificial intelligence (AI) framework for agents that operate in cloud environments. Dubbed AIOpsLab, it is a principled research framework that enables developers to build, test, compare, and improve AIOps agents. The framework is supported by Azure AI Agent Service. AIOpsLab uses an intermediary interface, a workload and fault generator, as well as an observability layer that exposes a wide array of telemetry data. Notably, the company said a research paper on the framework was accepted at the annual ACM Symposium on Cloud Computing (SoCC’24).

Microsoft Releases AIOpsLab for Cloud-Based Agents

Cloud-based services and enterprises that leverage them often face significant operational challenges, specifically in fault diagnosis and mitigation. AIOps agents, also known as AI agents for IT operations, are software-based tools that are used to monitor, analyse, and optimise cloud systems and solve these operational challenges.

Microsoft researchers, in a blog post, highlighted that when it comes to incident root cause analysis (RCA) or triaging, these AIOps agents rely on proprietary services and datasets, and use frameworks that only cater to specific solutions. This fails to capture the dynamic nature of real-world cloud services.

To solve this pain point, the company released an open-source standardised framework dubbed AIOpsLab for developers and researchers that will enable them to design, develop, evaluate, and enhance the capabilities of agents. One of the fundamental ways it solves the problem is by strictly separating the agent and the application service using an intermediate interface. This interface can be used to integrate and extend other system parts.

This enables the AIOps agent to address the problem step by step, mimicking real-life scenarios. For instance, the agent can be taught to first read the problem description, then understand the instructions, and then invoke the available application programming interfaces (APIs) as actions.
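The agent/service separation described above might look something like the following sketch. All class, method, and action names here are hypothetical and invented for illustration; the framework's actual interface is defined in its GitHub repository.

```python
class OrchestratorInterface:
    """Stand-in for the intermediate layer between an agent and a cloud
    service: it exposes a problem description and a fixed set of action APIs,
    so the agent never touches the service directly."""
    def __init__(self):
        self._fault = "checkout-service pods crash-looping"
        self._resolved = False

    def problem_description(self) -> str:
        return f"Incident detected: {self._fault}"

    def actions(self) -> list:
        return ["get_logs", "restart_pods", "submit_diagnosis"]

    def execute(self, action: str) -> str:
        if action == "get_logs":
            return "OOMKilled: container memory limit exceeded"
        if action == "restart_pods":
            self._resolved = True
            return "pods restarted"
        return "unknown action"


def run_agent(env) -> bool:
    # Step by step, as described in the post: read the problem description,
    # inspect the available APIs, then invoke them as actions.
    problem = env.problem_description()  # 1. problem description
    available = env.actions()            # 2. instructions / available APIs
    logs = env.execute("get_logs") if "get_logs" in available else ""
    if "OOMKilled" in logs:              # 3. act on what was observed
        env.execute("restart_pods")
    return env._resolved


print(run_agent(OrchestratorInterface()))  # → True
```

Because the agent only ever sees the interface, the same agent can be evaluated against different simulated services, which is the point of the standardisation.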

AIOpsLab also comes with a workload and fault generator that can be used to train these AI agents. It can simulate both faulty and normal scenarios, enabling AIOps agents to learn to resolve them and to eliminate any unwanted behaviour.

Additionally, AIOpsLab comes with an extensible observability layer that offers monitoring capabilities to the developer. While the system collects a wide array of telemetry data, the framework can show only the data relevant to a particular agent, giving developers a granular way of making changes.

AIOpsLab currently supports four key tasks within the AIOps domain — incident detection, localisation, root cause diagnosis, and mitigation. Currently, Microsoft’s open-source AI framework is available on GitHub under the MIT licence for personal and commercial use cases.


Google Updates Gemini AI Design on Web Interface and Android App

Google has made several minor adjustments to Gemini’s design on both the web interface and the Android app. While minor, these changes to the artificial intelligence (AI) chatbot will make it easier to use and display more relevant information. On the web, the text field has been redesigned, and certain icons have been repositioned. On the Android app, the model information is now shown and the Saved Info menu has been added. Saved Info was introduced to Gemini last month, and it allows the chatbot to remember information about the user.

Google Gemini App Now Displays AI Model Information

The web version of Gemini is now more closely aligned with the app version of the AI chatbot. The design change is minor and only affects the text field of the interface. Earlier, the Upload Images icon (for free users) or the Plus icon (for Gemini Advanced subscribers) was placed on the right side of the text field.


Gemini web version’s new design

However, this icon has now been moved to the far left. The “Ask Gemini” text now sits next to the Plus or Upload Images icon, and only the microphone icon remains on the right side. While it might be a minor change, it makes the overall text field look neater while reducing the chances of accidental taps.

Coming to the Android app of Gemini, it has also received some design changes. First, users will now see the AI model information at the top of the screen. When on the homepage, users will see Gemini Advanced followed by the text 1.5 Pro, highlighting that the current model is Gemini 1.5 Pro. This is shown between the history and the account menu.

On Pixel devices, the information is replaced by Gemini 1.5 Flash. Once a user initiates a conversation with the chatbot, the Gemini Advanced text is replaced with just “1.5 Pro”. This was first spotted by 9to5Google.

Second, the Saved Info menu has now been added to the account menu. However, tapping on it takes users to the Saved info website in a browser window.


OpenAI Said to Aim to Attract More Investment by Removing ‘AGI’ Clause With Microsoft

OpenAI is in discussions to remove a clause that shuts Microsoft out of the start-up’s most advanced models when it achieves “artificial general intelligence”, as it seeks to unlock future investments, the Financial Times reported on Friday.

As per the current terms, when OpenAI creates AGI – defined as a “highly autonomous system that outperforms humans at most economically valuable work” – Microsoft’s access to such a technology would be void.

The ChatGPT-maker is exploring removing the condition from its corporate structure, enabling Microsoft to continue investing in and accessing all OpenAI technology after AGI is achieved, the FT reported, citing people familiar with the matter.

Microsoft and OpenAI did not immediately respond to Reuters’ requests for comment.

The clause was included to protect the technology from being misused for commercial purposes, giving its ownership to OpenAI’s non-profit board.

“AGI is explicitly carved out of all commercial and IP licensing agreements,” according to OpenAI’s website.

The OpenAI board would determine when AGI is achieved, the website said.

OpenAI’s board is discussing the options and a final decision has not been made, the FT report said.

Microsoft-backed OpenAI was working on a plan to restructure its core business into a for-profit benefit corporation no longer governed by its non-profit board, Reuters first reported in September.

In October, OpenAI closed a $6.6 billion funding round which valued it at $157 billion.

© Thomson Reuters 2024


Amazon Web Services (AWS) Launches Automated Reasoning Checks in Preview to Combat AI Hallucinations

Amazon Web Services (AWS) launched a new service at its ongoing re:Invent conference that will help enterprises reduce instances of artificial intelligence (AI) hallucination. Launched on Monday, the Automated Reasoning checks tool is available in preview and can be found within Amazon Bedrock Guardrails. The company claimed that the tool mathematically validates the accuracy of responses generated by large language models (LLMs) and prevents factual errors from hallucinations. It is similar to the Grounding with Google Search feature, which is available via both the Gemini API and Google AI Studio.

AWS Automated Reasoning Checks

AI models can often generate responses that are incorrect, misleading, or fictional. This is known as AI hallucination, and the issue impacts the credibility of AI models, especially when used in an enterprise space. While companies can somewhat mitigate the issue by training the AI system on high-quality organisational data, the pre-training data and architectural flaws can still make the AI hallucinate.

AWS detailed its solution to AI hallucination in a blog post. The Automated Reasoning checks tool has been introduced as a new safeguard and is added in preview within Amazon Bedrock Guardrails. Amazon explained that it uses “mathematical, logic-based algorithmic verification and reasoning processes” to verify the information generated by LLMs.

The process is pretty straightforward. Users will have to upload relevant documents that describe the rules of the organisation to the Amazon Bedrock console. Bedrock will automatically analyse these documents and create an initial Automated Reasoning policy, which will convert the natural language text into a mathematical format.

Once done, users can move to the Automated Reasoning menu under the Safeguards section. There, a new policy can be created and users can add existing documents that contain the information that the AI should learn. Users can also manually set processing parameters and the policy’s intent. Additionally, sample questions and answers can also be added to help the AI understand a typical interaction.

Once all of this is done, the AI will be ready to deploy, and the Automated Reasoning checks tool will automatically flag any incorrect responses the chatbot provides. Currently, the tool is available in preview only in the US West (Oregon) AWS region. The company plans to roll it out to other regions soon.
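Conceptually, the verification step amounts to checking each claim in a response against logic rules derived from the uploaded documents. The sketch below is purely illustrative: AWS has not published a code-level interface in this announcement, so the policy format, the rule, and every function name here are invented.

```python
# Rules extracted from a hypothetical HR policy document, expressed as
# predicates over known facts: an employee is eligible for remote work
# only if their tenure is at least 12 months.
policy_rules = [
    ("remote_work_eligible", lambda facts: facts["tenure_months"] >= 12),
]

def check_response(claimed: dict, facts: dict) -> list:
    """Return the claims in an LLM response that violate the policy rules."""
    violations = []
    for claim, rule in policy_rules:
        if claimed.get(claim) and not rule(facts):
            violations.append(claim)
    return violations

# The chatbot (wrongly) tells a 6-month employee they qualify for remote
# work; the logic check flags the claim instead of letting it through.
print(check_response({"remote_work_eligible": True}, {"tenure_months": 6}))
# → ['remote_work_eligible']
```

The appeal of this logic-based approach over statistical fact-checking is that a rule either holds or it does not, which is what lets the vendor describe the validation as mathematical.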


World Labs Unveils AI System That Can Generate 3D Interactive Worlds Using an Image

World Labs, the artificial intelligence (AI) startup, unveiled its first AI system on Monday. The currently unnamed system can generate interactive 3D worlds from a single image input, turning the 2D visual asset into explorable 3D scenes that users can navigate using a keyboard and mouse. The AI system is currently in preview and has not been made public. However, the startup, which was founded by computer scientist Fei-Fei Li, stated that it is working to release the full version soon.

World Labs Unveils AI System Capable of Generating 3D Worlds

In a blog post, the San Francisco-based startup showcased the capabilities of its AI model. World Labs highlighted that most generative AI tools today can create 2D content such as images or videos. Although some AI firms do generate 3D models from 2D images or text prompts, the scope is pretty limited. In recent times, only Google DeepMind has unveiled an AI model that generates unique 2D video game levels.

However, based on the interactive assets shared by the startup, the unnamed AI system’s capabilities surpass the generative capabilities seen so far. Put simply, the company claims users can provide any image that depicts a scene as input, and the AI model can generate a 3D interactive version of it. This means users can move forward, backward, and side to side to explore the generated area.

The AI model not only generates three-dimensional renders of the objects in the image, it also creates unseen details from scratch, such as new alleyways, ceiling art, and new objects. World Labs claims that apart from the initial image, everything is generated by the AI system.

Additionally, the generated scenes can also be modified. Users can change camera angles, depth, and zoom, as well as add 3D effects to the background and to the objects in the foreground.

World Labs’ AI system can also be integrated with other AI tools. The startup said this will allow creators to first generate the starting image using a familiar text-to-image generator such as Ideogram, DALL-E, or Midjourney, and then create a 3D world using the startup’s tool. The AI firm is currently working with a few creators to test the AI system’s capabilities and its 3D-native generative AI workflow.

As of now, the AI system is not publicly available, and the startup highlighted that it is still working to improve the size and fidelity of the generated worlds. However, interested individuals can join the company’s waitlist to be notified when the AI system is released.

