
Xiaomi 15 Series Gets Google Gemini Integration; HyperOS 2.0 Global Rollout Timeline Revealed

Xiaomi announced a collaboration with Google on Sunday, ahead of the upcoming Mobile World Congress (MWC 2025) in Barcelona. Under the partnership, the Mountain View-based tech giant's Gemini artificial intelligence (AI) chatbot will be integrated with first-party Xiaomi apps on the Xiaomi 15 series, allowing Gemini to find information and take action across several of the Chinese consumer tech brand's apps. Alongside the announcement, the company also launched the Xiaomi 15 Ultra and Xiaomi 15 in global markets.

Xiaomi Brings Gemini AI Capabilities to the Xiaomi 15 Series

In a blog post, Xiaomi announced that it has partnered with Google to bring the latter's AI services to the Xiaomi 15 series. As part of the collaboration, the Gemini AI chatbot is being integrated with several Xiaomi apps, including Xiaomi Notes, Xiaomi Calendar, and Xiaomi Clock. This is the first time the Chinese brand is letting a third-party AI service provider access its first-party apps.

While Xiaomi did not reveal many details about how this integration will function, it highlighted that Gemini will be able to access information from within these apps as well as complete several tasks in them. These tasks likely include creating a new note or calendar event, or setting an alarm. However, it is unclear whether users will also be able to use Gemini to edit or delete a note, event, or alarm.

In a footnote, Xiaomi stated that availability of the feature might vary by device, country, and language. Xiaomi 15 series users will likely have to invoke the Gemini assistant to access these features, which are server-based and require an Internet connection.

With this integration, Xiaomi is also expanding its AI features, which have so far been limited to camera features and the China-exclusive AI-powered Super Xiao AI assistant. The Xiaomi 15 series also gets Google's Circle to Search, an AI-powered visual lookup feature.

Xiaomi Reveals HyperOS 2 Rollout Timeline

Meanwhile, in a post on X (formerly known as Twitter), Xiaomi's official handle announced that the HyperOS 2 operating system will be available globally out-of-the-box on the Xiaomi 15 series and give users early access to the HyperAI suite of features. The company, however, did not delve into the AI features users will get to try out. The announcement also hints at the Chinese brand's ambition to rival Samsung's Galaxy AI features and Oppo's suite of in-house AI tools.

As per the post, HyperOS 2 with HyperAI is now available on the Xiaomi 15 and Xiaomi 15 Ultra in global markets. The company's recently released Pad 7 and Pad 7 Pro will also get the update out-of-the-box, alongside the Xiaomi Watch S4 and Redmi Watch 5. However, the Watch S4 and Watch 5 will not be getting AI features.

HyperAI will also be rolled out to the Xiaomi 14 lineup, Xiaomi Mix Flip, Redmi Note 14 Pro+ 5G, and Xiaomi Pad 6S Pro starting in April.

Coming to the HyperOS 2 rollout schedule, the company stated that the Xiaomi 13T Pro, Redmi Note 13 series, and Smart Band 9 Pro will get the update by the end of March.

The remaining Xiaomi 13 series, the entire Xiaomi 12 series, Mi 11 and Mi 11 Ultra, Xiaomi 11 Lite 5G NE, Redmi 13 series, Redmi Note 12 series, Redmi 12 5G, and Redmi 12 will get the update by May. In the same period, the HyperOS 2 update will be released for the Xiaomi Pad 6, Redmi Pad Pro 5G, Redmi Pad Pro, Redmi Pad SE 8.7 4G, Redmi Pad SE 8.7, and Redmi Pad SE.

Additionally, the Redmi Note 14 series, Redmi A3 Pro, and Redmi 14C will get the update between March and June.




Intel Xeon 6700, 6500 Series Processors With Performance Cores Announced

Intel unveiled two new processor series in the Xeon 6 family on Monday. Dubbed Intel Xeon 6700 and Xeon 6500 series, the new processors offer improved performance and power efficiency compared to the previous generation. They are based on the x86 architecture and come with up to 86 cores. The chipmaker claimed that the newly launched chipsets feature dedicated Performance cores (P-cores) and are aimed at complex artificial intelligence (AI) tasks, traditional enterprise apps, and high-performance computing (HPC) solutions.

Intel Xeon 6700/6500 Series Processors Introduced

In a newsroom post, the chipmaker detailed the new processors in the Intel Xeon 6 family. Alongside the 6700 and 6500 series, the company also unveiled Xeon 6 for network and edge, a system-on-chip (SoC) designed for high performance and power efficiency. Notably, these are not retail-focused processors, and are instead aimed at data centres.

According to the company, the Intel Xeon 6700 and 6500 series processors with P-cores offer 1.4X higher performance compared to the fifth generation of Intel Xeon processors, and can handle a more diverse set of workloads, with a focus on enterprise tasks. The company said the chips are also designed to work with AI systems and can handle complex tasks.

Intel Xeon 6700 and 6500 series specifications
Photo Credit: Intel

Compared to the fifth-generation AMD EPYC processors, the Xeon 6700 and 6500 series chipsets are claimed to offer up to 1.5X better performance in AI inference while using only two-thirds of the cores. Intel also claimed that the chipsets offer improved performance-per-watt, allowing, on average, a 5:1 consolidation of five-year-old servers.

Coming to technical specifications, the new chipsets feature up to 86 cores with a Thermal Design Power (TDP) of between 150W and 300W. They come with up to eight channels of DDR5 MRDIMM (multiplexed rank DIMM) memory, up to 88 lanes of PCIe 5.0 (rising to 136 lanes in single-socket designs), as well as 64 lanes of CXL 2.0.

On the other hand, the Intel Xeon 6 for network and edge uses in-built accelerators for virtualised radio access networks (vRAN), media, AI, and network security. The company said the processor is designed to address the growing demand for network and edge solutions in the AI-driven world.

These SoCs also deliver a 70 percent improvement in performance-per-watt with 2.4X the RAN capacity compared to the previous generation. The Xeon 6 for network and edge also features the Intel Media Transcode Accelerator, an in-built media accelerator that further improves power optimisation.




Researchers Discover DeepSeek Has Links to Chinese Telecom Firm Banned in US: Report

DeepSeek source code reportedly contains evidence that links the popular artificial intelligence (AI) chatbot with a Chinese telecommunication provider that was banned in the US. According to a report, a cybersecurity firm has uncovered code that could be used to send data entered on DeepSeek’s web client to China Mobile. The code reportedly relates to the account creation and login process on the Chinese AI chatbot platform. While it could not be confirmed that DeepSeek is indeed sending data, the researchers were also not able to rule out the possibility.

DeepSeek Code Links Chatbot to China Mobile

The Associated Press (AP) reported that DeepSeek contains code that could potentially send user login information to China Mobile. The publication claimed that it received a report about the code from the Canada-based cybersecurity firm Feroot Security. Multiple independent experts reportedly verified these claims.

Notably, China Mobile was banned from operating in the US in 2019 after the government raised national security concerns due to the link between the telecom operator and the Chinese government. Additionally, in 2021, the US government also put sanctions on Americans investing in the company after finding evidence of its links with the Chinese military.

The report did not reveal details about the alleged code that links DeepSeek's chatbot with the telecom operator. However, the cybersecurity firm said it discovered code that could enable the AI firm to send login information as well as queries directly to China Mobile's servers.

The cybersecurity firm also highlighted that the exposed code shows a connection that could be far more nefarious than that of TikTok, which was banned in the US for a few hours before being reinstated.

“The implications of this are significantly larger because personal and proprietary information could be exposed. It’s like TikTok but at a much grander scale and with more precision. It’s not just sharing entertainment videos. It’s sharing queries and information that could include highly personal and sensitive business information,” Ivan Tsarynny, CEO of Feroot told AP.

Notably, the researchers have not analysed DeepSeek's mobile app, which could also contain similar code. The DeepSeek iOS app recently topped the App Store's “Top free apps” chart in the US, overtaking OpenAI's ChatGPT.


Microsoft Adds DeepSeek-R1 AI Model to Its Azure AI Foundry and GitHub

Microsoft added support for the recently released DeepSeek-R1 artificial intelligence (AI) model to its Azure AI Foundry and GitHub on Wednesday. The reasoning-focused AI model can now be accessed via these platforms. The Redmond-based tech giant also highlighted that it has conducted rigorous safety evaluations to ensure that the model is safe to use for both individual and commercial purposes. Additionally, the company is also bringing distilled versions of the R1 model to Copilot+ PC users via the AI Toolkit.

Microsoft Brings DeepSeek-R1 to Azure AI Foundry, GitHub

In a blog post, the tech giant announced that the DeepSeek-R1 AI model is now available in the model catalogue of Azure AI Foundry and GitHub. Notably, Azure AI Foundry is an enterprise-focused platform where developers can build, evaluate, and deploy generative AI applications and custom copilots.

Making the announcement, Microsoft’s Corporate Vice President of AI Platform, Asha Sharma said, “One of the key advantages of using DeepSeek R1 or any other model on Azure AI Foundry is the speed at which developers can experiment, iterate, and integrate AI into their workflows.”

The post also highlighted that the company conducted red teaming and safety evaluations on the R1 model, including automated assessments of model behaviour as well as extensive security reviews. Microsoft stated that it has taken steps to mitigate potential risks. Azure AI Foundry also provides additional safeguards via its Content Safety filtering system and the Safety Evaluation System.

To use DeepSeek-R1, users can search for it in the model catalogue. After finding it, they will have to open the model card and click on Deploy. This provides the user with the inference application programming interface (API) endpoint and key required to run the model.
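For a sense of what that looks like in practice, below is a minimal sketch of calling such a deployment from Python, assuming a serverless endpoint and the azure-ai-inference package; the endpoint URL, key, and model name are placeholders rather than values from Microsoft's post.

```python
# Minimal sketch: querying a DeepSeek-R1 deployment on Azure AI Foundry.
# Assumes `pip install azure-ai-inference`; the endpoint and key below are
# placeholders for the values shown on the deployment page.
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<your-resource>.services.ai.azure.com/models",  # placeholder
    credential=AzureKeyCredential("<your-api-key>"),                  # placeholder
)

# Send a single user prompt and print the model's reply.
response = client.complete(
    model="DeepSeek-R1",  # model name as it appears in the catalogue
    messages=[UserMessage(content="Summarise why the sky appears blue.")],
)
print(response.choices[0].message.content)
```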

Distilled DeepSeek-R1 Coming to Copilot+ PCs Users

In a separate post, Microsoft announced that it is bringing optimised versions of DeepSeek-R1 to Copilot+ PCs. These distilled versions will first arrive on Snapdragon X-powered devices, and will later be added to PCs powered by the Intel Core Ultra 200V chipset and others. The first release will be the DeepSeek-R1-Distill-Qwen-1.5B model, which can be accessed via the AI Toolkit. The 7B and 14B variants will be added later.


Meta AI Is Getting a New Memory Feature and Personalised Recommendations

Meta AI is getting a couple of new upgrades, the company announced on Monday. Meta wants its artificial intelligence (AI) chatbot to offer a more personalised experience to users, and the two new features will enable it to learn more about users. The first is a memory feature that will allow the chatbot to remember certain information shared by the user in individual chats, and the second is a personalised recommendation feature that will allow Meta AI to look through the user’s social media profiles and in-app activities to suggest relevant information.

In a newsroom post, the social media giant announced two new ways it is making Meta AI more personalised for users. The company said it has been experimenting with a new memory feature that allows the chatbot to remember certain information about the user.

Memory in Meta AI can only be saved in individual chats. Users can either specifically tell the AI to remember particular details, or it can remember certain information automatically during conversations. For instance, if a user asks the chatbot to suggest breakfast ideas and it suggests an omelette, the user can tell Meta AI that they are vegetarian and it will remember this. In future conversations, the AI will then only suggest vegetarian meal ideas.

Memory in Meta AI
Photo Credit: Meta AI

Meta did not share what kind of information can be saved by the AI, or whether it will include sensitive information such as financial and medical details. However, users will be notified whenever the chatbot saves a new piece of information, and they will be able to delete memories manually.

Memory in Meta AI is rolling out to Facebook, Messenger, and WhatsApp for iOS and Android in the US and Canada.

The second feature allows the chatbot to collect information about the user to generate personalised recommendations. The social media giant highlighted that the information will be taken from user profiles on Facebook and Instagram as well as in-app activities such as watching Reels, liking and commenting on posts, and more.

Explaining how the feature would work, the post stated that if a user asks for recommendations for a fun activity with the family on the weekend, Meta AI can find the user's home location from Facebook, go through recently viewed Reels to find activities the user might be interested in, and combine that with information from the memory feature to recommend a music concert. This feature will be available on Facebook, Messenger, and Instagram in the US and Canada.

Notably, the company did not address whether users will be able to choose not to share this information with Meta AI.


OpenAI Faces New Copyright Case From Global Publishers in India

Indian book publishers and their international counterparts have filed a copyright lawsuit against OpenAI in New Delhi, a representative said on Friday, the latest in a series of global cases seeking to stop the ChatGPT chatbot accessing proprietary content.

Courts across the world are hearing claims by authors, news outlets and musicians who accuse technology firms of using their copyright work to train AI services and who are seeking to have content used to train the chatbot deleted.

The New Delhi-based Federation of Indian Publishers told Reuters it had filed a case at the Delhi High Court, which is already hearing a similar lawsuit against OpenAI.

The case was filed on behalf of all the federation’s members, who include publishers like Bloomsbury, Penguin Random House, Cambridge University Press and Pan Macmillan, as well as India’s Rupa Publications and S.Chand and Co, it said.

“Our ask from the court is that they should stop (OpenAI from) accessing our copyright content,” Pranav Gupta, the federation’s general secretary said in an interview about the lawsuit, which concerns the ChatGPT tool’s book summaries.

“In case they don’t want to do licensing with us, they should delete datasets used in AI training and explain how we will be compensated. This impacts creativity,” he added.

OpenAI did not respond to a request for comment on the allegations and the lawsuit, which was filed in December but is being reported here for the first time. It has repeatedly denied such allegations, saying its AI systems make fair use of publicly available data.

OpenAI kicked off an investment, consumer and corporate frenzy in generative AI after the November 2022 launch of ChatGPT. It wants to stay ahead in the AI race after raising $6.6 billion last year.

The Indian book publishers’ group is seeking to join Indian news agency ANI’s lawsuit against the Microsoft-backed OpenAI, which is the most high-profile legal proceeding in the nation on this subject.

“These cases represent a pivotal moment and can potentially shape the future legal framework on AI in India. The judgment passed here will test the balance between protecting IP and promoting tech advancement,” said Siddharth Chandrashekhar, a Mumbai-based lawyer.

Responding to the ANI case, OpenAI said in comments reported by Reuters this week that any order to delete training data would result in a violation of its US legal obligations, and that Indian judges have no jurisdiction to hear a copyright case against the company as its servers are located abroad.

The federation said OpenAI offers services in India so its activities should fall under Indian laws.

Reuters, which holds a 26% interest in ANI, has said in a statement it is not involved in its business practices or operations.

OpenAI made its first India hire last year when it tapped former WhatsApp executive, Pragya Misra, to handle public policy and partnerships in the country of 1.4 billion people, where millions of new users are going online, thanks to cheap mobile data prices.

Worries Over Book Summaries

A Reuters reporter asked ChatGPT on Friday for details of the first volume of the Harry Potter series by J. K. Rowling, published by Bloomsbury. The AI tool responded with a chapter-by-chapter summary and a key events summary including the story’s climax.

It stopped short of giving the actual text, however, saying, “I cannot provide the entire text of the book, as it is copyrighted material.”

Penguin Random House in November said it has started a global initiative to include a statement on the copyright page of its titles saying “no part of this book may be used or reproduced in any manner for the purpose of training” AI technologies.

The Indian federation’s December filing, which was seen by Reuters, argues it has obtained “credible evidence/information” from its members that OpenAI used their literary works to train its ChatGPT service.

“This free tool produces book summaries, extracts, why would people buy books then?” Gupta said, referring to AI chatbots using extracts from unlicensed online copies. “This will impact our sales, all members are concerned about this.”

The federation’s plea has so far only been listed before a court registrar in New Delhi, who on Jan. 10 asked OpenAI to respond in the matter. A judge will now hear the case on Jan. 28.

© Thomson Reuters 2024


Google Titans AI Architecture Unveiled With Ability to Solve Long-Term Memory Issues in AI Models

Google researchers unveiled a new artificial intelligence (AI) architecture last week that can enable large language models (LLMs) to remember the long-term context of events and topics. A paper was published by the Mountain View-based tech giant on the topic, and the researchers claim that AI models trained using this architecture displayed a more “human-like” memory retention capability. Notably, Google ditched the traditional Transformer and Recurrent Neural Network (RNN) architectures to develop a new method to teach AI models how to remember contextual information.

Titans Can Scale AI Models’ Context Window to More Than 2 Million Tokens

The lead researcher of the project, Ali Behrouz, posted about the new architecture on X (formerly known as Twitter). He claimed that the new architecture provides a meta in-context memory with attention that teaches AI models how to remember information at test time.

According to Google’s paper, which has been published in the pre-print online journal arXiv, the Titans architecture can scale the context window of AI models to larger than two million tokens. Memory has been a tricky problem to solve for AI developers.

Humans remember information and events with context. If someone asked a person what they wore last weekend, they would be able to recall additional contextual information, such as attending the birthday party of a person they have known for the last 12 years. This way, when asked a follow-up question about why they wore a brown jacket and denim jeans last weekend, the person would be able to contextualise it with all of this short-term and long-term information.

AI models, on the other hand, typically use retrieval-augmented generation (RAG) systems, modified for Transformer and RNN architectures, which store information as nodes. When an AI model is asked a question, it accesses the particular node that contains the main information, as well as nearby nodes that might contain additional or related information. However, once a query is answered, the retrieved information is removed from the system to save processing power.
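As a rough illustration of that retrieve-then-discard pattern, here is a toy sketch (not Google's code) in which stored facts are embedded as vectors, the closest ones are pulled in to answer a query, and nothing persists between calls; the embeddings are random placeholders standing in for a real embedding model.

```python
import numpy as np

# Toy retrieval step: each stored fact is a vector ("node"), the query pulls in
# the closest ones as extra context, and nothing is kept once the answer is produced.
rng = np.random.default_rng(0)
facts = ["user wore a brown jacket", "birthday party last weekend", "prefers tea"]
fact_vecs = rng.normal(size=(len(facts), 8))  # placeholder embeddings, one per fact

def retrieve(query_vec, top_k=2):
    # Cosine similarity between the query and every stored fact.
    sims = fact_vecs @ query_vec / (
        np.linalg.norm(fact_vecs, axis=1) * np.linalg.norm(query_vec)
    )
    retrieved = [facts[i] for i in np.argsort(sims)[::-1][:top_k]]
    # The retrieved context only lives inside this call; once it returns,
    # the model has no memory of what was fetched.
    return retrieved

print(retrieve(rng.normal(size=8)))
```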

However, there are two downsides to this. First, an AI model cannot remember information in the long run. If one wanted to ask a follow-up question after a session was over, one would have to provide the full context again (unlike how humans function). Second, AI models do a poor job of retrieving information involving long-term context.

With Titans AI, Behrouz and other Google researchers sought to build an architecture that enables AI models to develop a long-term memory that can run continually, while forgetting information so that it remains computationally efficient.

To this end, the researchers designed an architecture that encodes history into the parameters of a neural network. Three variants were used — Memory as Context (MAC), Memory as Gating (MAG), and Memory as a Layer (MAL). Each of these variants is suited for particular tasks.

Additionally, Titans uses a new surprise-based learning system, which tells AI models to remember unexpected or key information about a topic. These two changes allow the Titans architecture to showcase improved memory function in LLMs.
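To make the surprise-based idea more concrete, below is a minimal conceptual sketch of a neural memory that is updated more strongly by unexpected inputs and gradually forgets old content. It is an illustration of the general idea under stated assumptions, not Google's actual Titans implementation; the module, loss, and hyperparameters are invented for the example.

```python
import torch

class NeuralMemory(torch.nn.Module):
    """Illustrative long-term memory: a small network whose parameters store past context."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, dim), torch.nn.SiLU(), torch.nn.Linear(dim, dim)
        )

    def forward(self, key: torch.Tensor) -> torch.Tensor:
        return self.net(key)

def memory_update(memory: NeuralMemory, key, value, lr=1e-2, forget=0.01):
    """One test-time update: larger 'surprise' (reconstruction error) changes the
    memory parameters more, while a small decay term gradually forgets stale content."""
    surprise = torch.nn.functional.mse_loss(memory(key), value)
    grads = torch.autograd.grad(surprise, list(memory.parameters()))
    with torch.no_grad():
        for p, g in zip(memory.parameters(), grads):
            p.mul_(1.0 - forget)  # forgetting keeps the memory bounded over long streams
            p.add_(-lr * g)       # surprising (high-error) inputs update the memory more
    return surprise.item()

# Usage: stream (key, value) pairs from a long context and update the memory online.
mem = NeuralMemory(dim=64)
k, v = torch.randn(1, 64), torch.randn(1, 64)
print(memory_update(mem, k, v))
```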

In a separate post, Behrouz claimed that based on internal testing on the BABILong benchmark (a needle-in-a-haystack approach), Titans (MAC) models were able to outperform large AI models such as GPT-4, Llama 3 + RAG, and Llama 3 70B.


Google Reportedly Working On a Content Filter Feature for Gemini

Google is reportedly working on a new artificial intelligence (AI) feature for its in-house chatbot Gemini. As per the report, the feature was spotted in the latest beta version of the Google app for Android and is called Content filters. As the name suggests, it is believed that the feature will allow users granular control over unwanted or harmful content generated by the AI chatbot. However, since the feature is said to not be public-facing or active, it is unclear how it would function.

Gemini May Get a Content Filter Feature

According to an Android Authority report, the Mountain View-based tech giant is working on a content moderation tool for Gemini. The evidence of the feature was spotted by the publication in the Google app for Android beta version 15.51.24.sa.arm64. Notably, the feature is not public-facing, so beta testers will not be able to test it out just yet.

The publication also shared a screenshot of the feature. Based on the screenshot, the new feature is available within the Gemini Settings page between the options of Screen context and Manage your Advanced subscription. The new feature is labelled as Content filters.

Underneath the name of the feature, the screenshot also shows a brief description that says, “Use filters to control the responses you see”. Not much else is known about the feature as it is not activated on the server side. Tapping on the option reportedly redirects users to a URL on Google's Gemini website. However, this page is currently not active and the publication was not able to find any further information.

However, based on this information, the feature is likely a tool for users to further control the kind of responses they would like to see. It could offer filters in the same way parental controls work on devices and websites, allowing users to only see safe content.

Alternatively, the feature could be expansive and allow users to blacklist websites, ban entire topics, and ground the responses against set verifiers. A less likely possibility is also that this setting allows users to tailor the responses of Gemini by writing style and tonality for all future conversations. However, these are just speculations, and nothing can be said conclusively until Google makes an announcement about the feature.


OpenAI’s ChatGPT and Sora Services Now Fully Operational After Suffering a Major Outage

OpenAI’s artificial intelligence (AI) chatbot ChatGPT suffered a major outage on Thursday in the US and some other regions. As per reports registered on an online outage monitor, the outage began at roughly 1:30pm ET on December 26 (12:00am IST, December 27). The outage also affected the AI firm's API service, as well as the text-to-video platform Sora. Notably, the issue persisted for nearly five hours before the company confirmed that the platforms were fully operational again.

OpenAI Suffers a Major Outage

According to the online outage monitoring platform Down Detector, a spike for ChatGPT was first spotted at 1:30pm ET, with about 50,000 users reporting an inability to access the AI chatbot. At 2:00pm ET (12:30am IST), OpenAI posted the first official update on its status page and said, “We are currently experiencing an issue with high error rates on ChatGPT, the API, and Sora.”

Soon after, the AI firm said that the issue was identified to be caused by an “upstream provider”, but did not specify it. Around the same time, Microsoft reported a power issue in one of its data centres in a post on X (formerly known as Twitter), highlighting that it affected the access and functionality of various Microsoft 365 services, Azure, and Xbox cloud gaming.

“We determined that an unexpected power incident in a portion of South Central US, AZ03, has affected multiple services,” the tech giant highlighted in a status page update. Microsoft’s services were back up by 5:00pm ET (3:30am IST). A little over an hour later, at 6:15pm ET (4:45am IST), OpenAI shared an update saying Sora was fully operational. It could not be confirmed whether the two outages were related.

OpenAI’s last update came at 9:04pm ET (7:34am IST), in which it highlighted that ChatGPT had mostly recovered. At the time of writing, Gadgets 360 staff members were able to access and interact with ChatGPT on both the web client and the mobile app. Reports on Down Detector have also fallen to single digits. The AI firm has said it will run a full root-cause analysis of this outage.


Microsoft Releases AIOpsLab, an Open-Source Standardised AI Framework for AIOps Agents

Microsoft researchers released an open-source artificial intelligence (AI) framework for agents that operate in cloud environments. Dubbed AIOpsLab, it is a principled research framework that enables developers to build, test, compare, and improve AIOps agents. The framework is supported by Azure AI Agent Service. AIOpsLab uses an intermediary interface, a workload and fault generator, as well as an observability layer that exposes a wide array of telemetry data. Notably, the company said that a research paper on the framework was accepted at the annual ACM Symposium on Cloud Computing (SoCC'24).

Microsoft Releases AIOpsLab for Cloud-Based Agents

Cloud-based services and enterprises that leverage them often face significant operational challenges, specifically in fault diagnosis and mitigation. AIOps agents, also known as AI agents for IT operations, are software-based tools that are used to monitor, analyse, and optimise cloud systems and solve these operational challenges.

Microsoft researchers, in a blog post, highlighted that when it comes to incident root cause analysis (RCA) or triaging, these AIOps agents rely on proprietary services and datasets, and use frameworks that only cater to specific solutions. This fails to capture the dynamic nature of real-world cloud services.

To solve this pain point, the company released an open-source standardised framework dubbed AIOpsLab for developers and researchers that will enable them to design, develop, evaluate, and enhance the capabilities of agents. One of the fundamental ways it solves the problem is by strictly separating the agent and the application service using an intermediate interface. This interface can be used to integrate and extend other system parts.

This enables the AIOps agent to address a problem in a step-by-step manner, mimicking real-life scenarios. For instance, the agent can be taught to first read the problem description, then understand the instructions, and then call the available application programming interfaces (APIs) as actions.
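As a rough illustration of that loop, here is a toy agent sketch. The class and method names are invented for the example and are not AIOpsLab's documented API; the framework's actual interface is described in its GitHub repository.

```python
# Hypothetical sketch of the step-by-step agent loop described above; all names here
# are illustrative assumptions, not AIOpsLab's real classes or methods.
class ToyAIOpsAgent:
    """A minimal agent that reads a problem description, then issues API calls as actions."""

    def __init__(self, actions):
        self.actions = actions  # mapping of action name -> callable exposed by the interface

    def solve(self, problem_description: str, instructions: str, max_steps: int = 5):
        history = []
        for _ in range(max_steps):
            # 1) Decide the next action from the description, instructions, and history so far.
            action_name, args = self.plan(problem_description, instructions, history)
            if action_name == "submit":
                return history
            # 2) Execute the action through the intermediary interface and record the result.
            observation = self.actions[action_name](**args)
            history.append((action_name, args, observation))
        return history

    def plan(self, problem, instructions, history):
        # Placeholder policy: fetch telemetry once, then submit.
        if not history:
            return "get_logs", {"service": "frontend"}
        return "submit", {}

# Usage with a stubbed action standing in for the framework's observability APIs.
agent = ToyAIOpsAgent(actions={"get_logs": lambda service: f"logs for {service}"})
print(agent.solve("High error rate on checkout", "Diagnose and mitigate"))
```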

AIOpsLab also comes with a workload and fault generator that can be used to train these AI agents. It can simulate both faulty and normal scenarios, enabling AIOps agents to learn how to resolve them and to weed out unwanted behaviour.

Additionally, AIOpsLab comes with an extensible observability layer that offers monitoring capabilities to developers. While the system collects a wide array of telemetry data, the framework can surface only the data relevant to a particular agent, giving developers a granular way of making changes.

AIOpsLab currently supports four key tasks within the AIOps domain: incident detection, localisation, root cause diagnosis, and mitigation. Microsoft's open-source AI framework is available on GitHub under the MIT licence for both personal and commercial use.
