In Short:
xAI’s Grok is a new AI assistant that users are warned may provide inaccurate information and should not be trusted without verification. Users automatically share their X data with Grok, raising privacy concerns, especially in the EU where regulations require consent. To protect their data, users can make their accounts private and opt out of training settings. It’s essential to stay informed about privacy policies.
xAI has issued a cautionary statement regarding the use of its artificial intelligence assistant, Grok, emphasizing that users are responsible for assessing the accuracy of the AI’s outputs. On its help page, xAI states that “This is an early version of Grok,” noting that the chatbot may “confidently provide factually incorrect information, missummarize, or miss some context.” The company advises users to “independently verify any information you receive” and stresses that they should not share personal or sensitive information during interactions with Grok.
Data Collection Concerns
Extensive data collection is another significant concern. Users are automatically opted in to sharing their X data with Grok, regardless of whether they use the AI assistant. According to the xAI Help Center, the company may use users’ X posts, along with their interactions and results generated with Grok, for training and fine-tuning purposes. Marijus Briedis, Chief Technology Officer at NordVPN, highlights the “significant privacy implications” of Grok’s training strategy. He points out that the AI’s ability to access and analyze potentially sensitive information, coupled with its capacity to generate content with minimal moderation, raises further alarm.
Training Data and Regulatory Scrutiny
While Grok-1 was trained on publicly available data up to Q3 2023 and did not use X data for pre-training, Grok-2 has been explicitly trained on all user posts, interactions, and results on X, with users automatically opted in. Angus Allan, Senior Product Manager at CreateFuture, a digital consultancy specializing in AI deployment, argues that xAI may have overlooked compliance with the EU’s General Data Protection Regulation (GDPR), which requires user consent for the use of personal data. Shortly after the launch of Grok-2, EU regulators pressured X to suspend training on EU users’ data.
The ramifications of ignoring user privacy laws could extend beyond Europe, as similar scrutiny may arise in other jurisdictions. Although the US lacks a comprehensive privacy regulation akin to the GDPR, the Federal Trade Commission has previously penalized Twitter for failing to respect user privacy preferences, as noted by Allan.
Opting Out of Data Use
To prevent their posts from being used to train Grok, users can make their accounts private and adjust X’s privacy settings to opt out of future model training. This is done by navigating to Privacy & Safety > Data Sharing and Personalization > Grok and unchecking the option that permits the use of posts and interactions for training. Allan warns that even if a user stops using X, their past posts, including images, can still be accessed and used for model training unless they have explicitly opted out.
Moreover, xAI allows users to delete their entire conversation history in a single action, with deleted conversations removed from its systems within 30 days, unless retained for security or legal reasons.
Future Considerations
As the development of Grok continues, its trajectory remains uncertain, but given its current practices, users would do well to monitor Musk’s AI assistant closely. Briedis recommends that users remain vigilant about the information they share on X and stay informed about any changes to its privacy policies or terms of service. Engaging with these settings is critical for controlling how personal information is managed and potentially used by technologies like Grok.