The MongoDB Dashboard Nightmare That Taught Me Everything (Almost) About QuickSight

Creating Amazon QuickSight Dashboards with Amazon Athena: A Developer’s Guide Through Data Visualization Complexity

A comprehensive guide born from countless hours of caffeinated debugging, multiple solution architect consultations, and the gradual acceptance that sometimes the simplest path involves three different AWS services and a substantial amount of patience.

What is Amazon QuickSight?

Amazon QuickSight is AWS’s cloud-based business intelligence service that promises to turn your data into beautiful, interactive dashboards. Think of it as PowerBI’s cloud-native cousin who went to Stanford and now works exclusively with AWS services. It’s designed to handle everything from simple pie charts to complex multi-tenant analytics, though as you’ll discover, the journey from data to dashboard can be more scenic than expected.

The service integrates with virtually every AWS data service, supports both SPICE in-memory processing and direct query modes, and offers embedding capabilities that range from straightforward to architecturally complex. It’s particularly well-suited for organizations already invested in the AWS ecosystem who want their data visualization to play nicely with their existing infrastructure.

Prologue: The MongoDB Integration Quest

Before diving into this guide, let me share the origin story of why this documentation exists. It began with what seemed like a straightforward request: “The client wanted a dashboard in their application,” which led to “Can you connect our MongoDB database to QuickSight for some dashboards?”

Simple enough, right? Well, as it turns out, direct MongoDB integration exists in that special AWS dimension where features go to be perpetually “under consideration.” After spending hours (which quietly became days) researching MongoDB-to-QuickSight connectors, I discovered the harsh reality that direct connectivity was about as available as a parking spot during a tech conference.

The journey involved multiple creative interpretations of “native connector,” several animated discussions with documentation that seemed to contradict itself, and at least three calls with solution architects who patiently explained why my “obvious” approach wouldn’t work. One existential crisis about the nature of data connectivity in the cloud era later, a particularly wise solution architect suggested: “have you considered Athena as a middle layer?” This question would prove to be the architectural breakthrough that made everything possible.

The Hidden Foundation: DevOps Engineer’s MongoDB-to-Athena Connection

This is the critical infrastructure work that had to happen before I could even access the QuickSight interface.

While I was busy researching QuickSight connectors and having existential crises about data connectivity, our DevOps engineer was working on the real solution. After the solution architect suggested Athena as a middle layer, the DevOps engineer took on the crucial task of establishing the MongoDB-to-Athena connection.

The Athena Lambda Connector Setup

The DevOps engineer navigated to the Amazon Athena console and configured a federated query connection using the following approach:

  1. Accessed Athena Data Sources: In the Athena console, they went to “Data sources” in the navigation pane

  2. Selected DocumentDB Connector: From the published list of data sources, they selected “Amazon DocumentDB” (which works with MongoDB endpoints)

  3. Created Lambda Function: They chose “Create Lambda function” which launched the AWS Serverless Application Repository deployment

  4. Configured Connection Parameters:

    • SecretNameOrPrefix: Set up for secure credential management
    • SpillBucket: Configured S3 bucket for query result overflow
    • AthenaCatalogName: Defined the catalog identifier
    • DocDBConnectionString: This was the critical piece; they entered the MongoDB connection string in the format:
      mongodb://username:password@mongodb-host:27017/database?authSource=admin&readPreference=secondaryPreferred&retryWrites=false
      
    • SecurityGroupIds: Configured network security
    • SubnetIds: Set up VPC networking for Lambda function access
  5. Deployed the Lambda Connector: The connector was deployed as a Lambda function that acts as a bridge between Athena and MongoDB

How It Works

The Lambda function serves as a translator between Athena’s SQL queries and MongoDB’s document structure. When QuickSight (via Athena) needs data, it invokes this Lambda function with the query parameters. The Lambda function then:

  • Connects to MongoDB using the provided connection string
  • Translates the SQL query into MongoDB aggregation pipelines
  • Fetches the data from MongoDB
  • Transforms the document results back into tabular format for Athena
  • Returns the structured data that QuickSight can consume

This setup was essential because it eliminated the need for complex ETL pipelines while still providing real-time data access. The DevOps engineer’s work here was the foundation that made everything else possible, though it remained largely invisible to the end-user experience.
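To make the translation step concrete, here is a deliberately tiny sketch of what the connector does conceptually. The real Lambda connector handles far more SQL than this; `translate_where` and `flatten_document` are illustrative helpers I made up, not part of the actual connector code.

```python
# Toy illustration of the SQL-to-MongoDB translation and the
# document-to-row flattening the Athena connector performs.

def translate_where(column, op, value):
    """Map a simple SQL comparison to a MongoDB $match stage."""
    mongo_ops = {"=": "$eq", ">": "$gt", "<": "$lt",
                 ">=": "$gte", "<=": "$lte", "!=": "$ne"}
    return {"$match": {column: {mongo_ops[op]: value}}}

def flatten_document(doc):
    """Turn a shallow MongoDB document into a tabular row for Athena."""
    return {key: str(value) if isinstance(value, dict) else value
            for key, value in doc.items()}

# Conceptually, SELECT * FROM orders WHERE amount > 100 becomes:
pipeline = [translate_where("amount", ">", 100)]
print(pipeline)  # [{'$match': {'amount': {'$gt': 100}}}]

# Nested document fields come back stringified into a flat row:
row = flatten_document({"_id": "abc123", "amount": 250,
                        "shipping_address": {"city": "Pune"}})
print(row["amount"])  # 250
```

The real connector also handles projections, joins pushed down from Athena, and type mapping, but the shape of the work is the same: SQL in, aggregation pipeline out, flat rows back.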

Prerequisites

Before embarking on this journey, ensure you have:

  • AWS account with appropriate permissions and enough budget for the inevitable “learning expenses”
  • Amazon QuickSight subscription because premium data visualization is worth the investment
  • Amazon Athena configured with data in S3, serving as the bridge between your MongoDB dreams and QuickSight reality
  • Proper IAM roles for the diplomatic handshake between QuickSight and Athena
  • Patience, which I’ve learned is non-negotiable
  • Coffee or your stimulant of choice

Step 1: Create Data Source


Data Source Creation

Navigate to Amazon QuickSight console and access the data source creation interface.

From the “Create data source” screen, you’ll find the familiar list of connectors. Notice how MongoDB isn’t there? That’s not a bug, it’s a feature, if by “feature” you mean “an opportunity to develop character through adversity.”

Available options include:

  • Amazon Athena which became our eventual savior
  • Amazon RDS, Amazon Aurora, Amazon S3 representing the usual suspects
  • Google BigQuery, MySQL, PostgreSQL because variety is the spice of data life
  • Salesforce, Snowflake, and other third-party sources for when you need to make friends with other clouds
  • Notably absent: MongoDB, still working on that “roadmap”

The interface provides a comprehensive list of data source options, making it easy to connect to various databases and cloud services like a dating app for data, but with better matching algorithms.

Step 2: Configure Athena Connection


Source Configuration

In the “New Amazon Athena data source” dialog, which represents another instance of AWS’s favorite communication method:

  1. Data source name: Enter descriptive name like “demo” while resisting the urge to name it “mongodb-workaround-attempt-47”
  2. Athena workgroup: Select workgroup from dropdown, typically “[primary]” because being primary is important
  3. Connection validation: Verify “Validated” status appears, providing the green checkmark of modest triumph
  4. Security: Ensure “SSL is enabled” for secure connections because data security is always in style
  5. Click “Create data source” to proceed and commit to the architectural relationship

This step took me exactly 2 minutes to configure and 6 hours to reach, after exploring every other possible connection method.
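If you prefer automation over console clicks, the same data source can be created through the QuickSight API. This is a sketch of the request shape, assuming the `boto3` QuickSight client; the account ID and data source ID below are placeholders, and the actual call is shown commented out.

```python
# Sketch: the console steps above, expressed as a create_data_source
# API request. Placeholder account/ID values; the boto3 call itself
# is not executed here.

data_source_request = {
    "AwsAccountId": "123456789012",   # placeholder account ID
    "DataSourceId": "demo-athena",    # placeholder unique ID
    "Name": "demo",
    "Type": "ATHENA",
    "DataSourceParameters": {
        "AthenaParameters": {"WorkGroup": "primary"}
    },
    "SslProperties": {"DisableSsl": False},  # keep SSL enabled
}

# To actually create it:
# import boto3
# boto3.client("quicksight").create_data_source(**data_source_request)

print(data_source_request["Type"])  # ATHENA
```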

Step 3: Select Database and Table

In the “Choose your table” interface, which functions like browsing a menu but for databases:

  1. Catalog: Select data catalog like “dev-data-source” representing your data’s address book
  2. Database: Choose target database such as “dev” where the interesting stuff lives
  3. Tables: Select from available tables:
  • dev-orders which serves as the protagonist of our story
  • dev-payment-status functioning as the reliable sidekick
  • dev-mailchimp-templates because even data needs newsletters

Query Options

  • Select table: Use predefined tables directly via the path of least resistance, which is perfectly valid
  • Use custom SQL: Write custom queries for complex requirements when you want to show off those SQL skills
  • Edit/Preview data: Review structure before proceeding using the “look before you leap” approach

Step 4: Configure Dataset Settings - The Real-Time Data Dilemma

Import Strategy Decision

In the “Finish dataset creation” dialog, we reached a critical architectural decision point:

Data Import Options:

  • Import to SPICE for quicker analytics, the logical choice for performance
  • Directly query your data, the path we ultimately chose despite the performance implications

The Client Reality Check:
At this point, the client delivered the classic requirement that changes everything: “We need near real-time data. Our users make decisions based on current information, and a few minutes delay could impact business outcomes.”

This is where SPICE, despite being nice and fast, became architecturally incompatible with our requirements. SPICE operates on scheduled refreshes, which meant our data would always be at least somewhat stale. The client’s emphasis on “near real-time” meant we had to embrace Direct Query mode, even though we knew it would come with painful loading times.

Configuration Details after accepting our fate:

  • Table: “dev-orders” serving as our star performer
  • Data source: “demo” keeping the naming simple
  • Schema: “dev” representing developer’s paradise
  • Query Mode: Direct Query selected with full knowledge of the performance implications

The Performance Trade-off:
Direct Query mode means every dashboard interaction triggers a live query to Athena, which then queries the underlying data in S3. This creates a chain of dependencies where performance is only as good as the slowest link. Users would experience loading times that could range from acceptable to “is this thing broken?” depending on query complexity and data volume.

Additional Settings:

  • Email owners when a refresh fails, because nobody likes surprise failures at 3 AM, though with Direct Query this applies more to connection issues than refresh failures

Step 5: Review and Prepare Data

The dataset editor provides comprehensive data management like a Swiss Army knife for people who overthink data transformations:

Field Management:

  • Complete field list with data types representing the cast of characters in our data transformation saga
  • Sample data preview for validation offering proof that the MongoDB-to-Athena pipeline actually worked
  • Data quality indicators and statistics providing the health metrics that would have saved hours of debugging
  • Manual data type override options for when automated inference needs human intervention

Key Data Fields (formerly MongoDB documents, now respectable columns):

  • _id, email_address, payment_intent forming the holy trinity of e-commerce
  • client_secret, status, payment_status handling state management across platforms
  • amount as Integer and payment as String where money talks and strings provide context
  • application_amount as Decimal and currency because precision matters in financial data
  • seller_email, shipping_address, etc. comprising the supporting cast

Step 6: Create Analysis and Visualizations

The analysis builder provides three main panels designed with the logical efficiency that should exist everywhere:

Left Panel - Data & Visuals:

  • Dataset: “dev-orders” with Direct Query mode, accepting the performance trade-offs for data freshness
  • Field List: All available dimensions and measures forming your analytical toolkit
  • Visual Types: Various chart options representing the greatest hits of data visualization
  • AutoGraph: Automatic chart selection for when decision fatigue hits

Center Canvas:

  • Drag-and-drop visualization workspace that’s surprisingly intuitive
  • Real-time chart updates providing instant gratification after hours of data pipeline debugging, though “instant” is relative with Direct Query
  • Multiple sheet support because one perspective is never sufficient

Building Your First Visualization

Create a “Count of Amount by Payment” pie chart, because after all that setup, even a simple pie chart feels like victory:

  1. Select Fields: Drag payment to Group/Color field well
  2. Add Measure: Drag amount to Values field well
  3. Chart Configuration:
  • Y Axis: payment as categorical
  • Value: amount with Count aggregation
  • Legend: Shows payment types including null, Stripe, and Paypal

The resulting visualization shows payment method distribution with interactive legend and hover details, proving that data can be both functional and aesthetically pleasing. Note that with Direct Query mode, each interaction triggers a new query to Athena, so users learn to be patient with their data exploration.

Step 7: The Row-Level Security Journey and Cost Reality Check

This is where the project took multiple unexpected detours into the fascinating world of row-level security, followed by a harsh lesson in QuickSight economics.

The Initial RLS Implementation Attempt

Row-Level Security in QuickSight initially seemed like the perfect solution for our multi-tenant data access requirements. It allows you to restrict which rows of data users can see based on their identity or assigned attributes, functioning like a bouncer for your data who checks IDs and decides what each person is allowed to see.

The Two-Collection RLS Challenge

My first implementation involved connecting two MongoDB collections: the main orders collection and a separate permissions collection containing user access rules. This required creating a data pipeline that could join these collections while maintaining the security model.

The permissions collection structure looked like this:

json

{
  "user_email": "seller@example.com",
  "allowed_seller": "seller@example.com",
  "region": "north",
  "access_level": "full"
}

The main challenge was ensuring that when the data moved from MongoDB through S3 to Athena and finally to QuickSight, the relationship between users and their permitted data remained intact and performant, especially with Direct Query mode adding latency to every security check.
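To make the security model concrete, here is a toy sketch of the row filtering that RLS would ultimately enforce, using the permissions record above. QuickSight performs this matching internally; the function and field usage here are illustrative only.

```python
# Toy sketch of the matching RLS performs: a row is visible only when
# its seller_email matches the user's allowed_seller value.

def visible_rows(rows, permission):
    """Return only the order rows the permission record allows."""
    return [r for r in rows
            if r["seller_email"] == permission["allowed_seller"]]

orders = [
    {"_id": "1", "seller_email": "seller@example.com", "amount": 120},
    {"_id": "2", "seller_email": "other@example.com", "amount": 80},
]
permission = {
    "user_email": "seller@example.com",
    "allowed_seller": "seller@example.com",
}

print([r["_id"] for r in visible_rows(orders, permission)])  # ['1']
```

The hard part in production was not this matching logic but keeping the user-to-permission relationship intact and fast as data flowed through every hop of the pipeline.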

The RLS Implementation Process (That We Eventually Abandoned)

Setting up row-level security required several steps that each demanded considerable patience:

Step 1: Permissions Dataset Creation
I created a separate dataset containing the mapping between users and their allowed data. This dataset included columns for user identification and the corresponding values they’re permitted to see.

Step 2: Data Type Constraints
RLS in QuickSight only works with string-based fields, which meant converting some of our numeric identifiers to string format during the ETL process. This constraint wasn’t immediately obvious and required architectural adjustments.

Step 3: Dataset Configuration
In QuickSight, I configured row-level security by:

  • Selecting “Set up” under Row-level security
  • Choosing “User-based rules”
  • Mapping the permissions dataset to the main dataset
  • Defining which columns should be used for matching

Step 4: User Management Hell
Here’s where reality hit hard. For RLS to work properly, each user needed to be registered in QuickSight with appropriate permissions. This meant:

  • Creating QuickSight users for each business user
  • Managing user lifecycles and access changes
  • Paying QuickSight user fees for each person who needed access

The Cost Revelation

After implementing RLS successfully and celebrating the security model, the DevOps engineer delivered the reality check that changed everything. The cost of maintaining individual QuickSight users for our user base was substantially higher than anticipated. With users ranging from $24/month for Authors to varying Reader costs depending on usage patterns, the monthly bill was becoming a significant line item.

The calculation was sobering:

  • 50 users × $24/month = $1,200/month minimum
  • Additional costs for Reader access and usage
  • Administrative overhead for user management
  • Security complexity of maintaining QuickSight user accounts

The Filter-Based Solution Pivot

Faced with budget constraints, we made a pragmatic architectural decision: abandon RLS in favor of parameter-based filtering using a single QuickSight user account. This approach trades some security granularity for significant cost savings and architectural simplicity.

Create Advanced Filtering - The New Approach

Setup where cost considerations meet security requirements:

  1. Click “Add” to create new filters opening the gateway to parameter-based security
  2. Configure filter conditions:
  • Filter type: Custom filter, equals, contains
  • Applied to: Select specific visuals or all visuals
  • Cross-sheet: Apply across multiple analysis sheets

Instead of relying on QuickSight’s built-in RLS, we implemented security through application-level filtering where our backend determines which data each user should see and passes appropriate filter parameters.

Parameter Creation for Single-User Architecture

Parameter Interface serving as the control mechanism for cost-effective data access:

  1. Name: “SellerEmail” which remains descriptive and security-relevant
  2. Data type: String, not alterable after creation, so choose wisely
  3. Values: Single value selected for simplicity
  4. Static default value: Enter a placeholder value
  5. Dynamic default: Configure data-driven defaults

Applying the parameter to the Dashboard:

  • Use #p.parameterName format, in our case: #p.SellerEmail=seller@example.com
  • This parameter gets appended to the analysis URL by our backend application
  • When applied, the dashboard dynamically filters to show only relevant data
  • The filtering happens at query time, ensuring real-time security enforcement
  • All users share the same QuickSight account, with security enforced at the application level
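A minimal sketch of the URL assembly our backend performs. The helper name is illustrative; URL-encoding the email with `urllib.parse.quote` is a precaution worth taking, since `@` and other characters are not URL-safe.

```python
from urllib.parse import quote

def build_filtered_url(base_embed_url, seller_email):
    """Append the SellerEmail parameter fragment to a QuickSight embed URL."""
    return f"{base_embed_url}#p.SellerEmail={quote(seller_email)}"

url = build_filtered_url(
    "https://quicksight.aws.amazon.com/embed/abc",  # placeholder embed URL
    "seller@example.com",
)
print(url)  # ends with #p.SellerEmail=seller%40example.com
```

Because the fragment is appended server-side after authentication, the frontend never decides which seller it is allowed to see.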

This approach required accepting certain security trade-offs:

Benefits:

  • Dramatically reduced QuickSight costs from hundreds per month to a single user fee
  • Simplified user management with no QuickSight user lifecycle to maintain
  • Faster implementation with fewer moving parts
  • Application-level security control matching our existing authentication patterns

Considerations:

  • Security enforcement moved from QuickSight to our application layer
  • All users technically share the same QuickSight session
  • Requires careful backend implementation to ensure proper parameter passing
  • Audit trails need to be maintained at the application level rather than QuickSight level

The filter-based approach proved to be a pragmatic solution that balanced security requirements with cost constraints, though it required maintaining vigilance in our application-level security implementation.

Step 8: The Embedding Architecture Decision

At this point in the project, the requirements expanded with the seemingly simple request: “we need to embed these dashboards in our application.” This led to another architectural decision point that required extensive research and consultation.

Understanding Embedding Options

After extensive research and multiple solution architect conversations, two primary embedding approaches emerged:

Registered User Embedding representing the VIP experience:

  • Full QuickSight functionality with user-specific permissions and bookmarks
  • Higher cost per user but comprehensive features
  • Complex user management but powerful capabilities

Anonymous User Embedding offering the democratic approach:

  • Read-only dashboard access with session-based pricing
  • Simplified user experience but limited functionality
  • Cost-effective for large user bases

Given our recent cost-consciousness awakening from the RLS experience, we chose registered user embedding, driven primarily by budget and by our simplified single-user architecture. The cost was predictable and manageable with only one QuickSight user account needed.

Step 9: Dashboard Publishing

This section represents the culmination of the entire architectural journey where analysis transforms into actionable insight:

  • Publish dashboard dialog representing the moment of truth
  • Dashboard naming and configuration for branding the final product
  • Sheet selection options to curate the user experience
  • Generative capabilities settings for AI-enhanced insights
  • Version control and notes providing documentation for future maintainers

Step 10: Embedding Implementation - The Cost-Conscious Serverless Architecture

After extensive cost analysis discussions informed by our RLS experience, we settled on registered user embedding using a single QuickSight account. This approach provided full functionality while maintaining cost predictability.

Backend Architecture: Lambda and API Gateway

Rather than exposing QuickSight API calls directly from the frontend, we implemented a serverless backend architecture using AWS Lambda and API Gateway. This approach provided better security, centralized token management, and clean separation of concerns while working with our single-user QuickSight architecture.

Lambda Function for URL Generation

python

import boto3
import json
import os

def lambda_handler(event, context):
    """
    Lambda function to generate QuickSight embedded URLs for registered users
    Uses a single QuickSight user account with parameter-based filtering
    """
    
    # Extract user information from the request
    user_email = event['requestContext']['authorizer']['claims']['email']
    dashboard_id = event['pathParameters']['dashboard_id']
    
    # Initialize QuickSight client
    quicksight_client = boto3.client('quicksight', region_name='us-east-1')
    
    try:
        # Generate the embed URL using our single QuickSight user account
        response = quicksight_client.generate_embed_url_for_registered_user(
            AwsAccountId=os.environ['AWS_ACCOUNT_ID'],
            ExperienceConfiguration={
                'Dashboard': {
                    'InitialDashboardId': dashboard_id,
                    'FeatureConfigurations': {
                        'Bookmarks': {'Enabled': True},
                        'StatePersistence': {'Enabled': True}
                    }
                }
            },
            SessionLifetimeInMinutes=600,  # 10 hours - reasonable for business hours
            # Single user account for all embedded sessions
            UserArn=f"arn:aws:quicksight:us-east-1:{os.environ['AWS_ACCOUNT_ID']}:user/default/{os.environ['QS_EMBED_USER']}",
            AllowedDomains=[os.environ['ALLOWED_DOMAIN']]  # Our application domain
        )
        
        # Add user-specific parameters to the embed URL for filtering
        embed_url_with_params = f"{response['EmbedUrl']}#p.SellerEmail={user_email}"
        
        return {
            'statusCode': 200,
            'headers': {
                'Access-Control-Allow-Origin': os.environ['ALLOWED_DOMAIN'],
                'Access-Control-Allow-Headers': 'Content-Type,X-Amz-Date,Authorization,X-Api-Key',
                'Content-Type': 'application/json'
            },
            'body': json.dumps({
                'embedUrl': embed_url_with_params,
                'requestId': response['RequestId']
            })
        }
        
    except Exception as e:
        print(f"Error generating embed URL: {str(e)}")
        return {
            'statusCode': 500,
            'headers': {
                'Access-Control-Allow-Origin': os.environ['ALLOWED_DOMAIN'],
                'Content-Type': 'application/json'
            },
            'body': json.dumps({
                'error': 'Failed to generate embed URL',
                'message': str(e)
            })
        }

API Gateway Configuration

The API Gateway setup included:

  • Authentication: Integrated with our existing user authentication system
  • CORS Configuration: Properly configured for our frontend domain
  • Rate Limiting: Implemented to prevent abuse and manage costs
  • Caching: Disabled for embed URLs since they’re time-sensitive and user-specific

API Gateway endpoint structure:

POST /api/v1/dashboards/{dashboard_id}/embed
Authorization: Bearer {jwt_token}

Frontend

We chose an iframe instead of calling the APIs directly from the frontend, for reasons that became obvious once we weighed the options.

Why iframe Over SDK?

The decision to use iframe instead of the QuickSight JavaScript SDK was based on several practical considerations:

Simplicity: The iframe approach required significantly less code and complexity. After spending weeks wrestling with data pipelines, security configurations, and cost optimizations, simplicity became a virtue.

Reliability: iframe embedding has been stable and well-tested across browsers for decades. The SDK, while feature-rich, introduced additional potential failure points in an already complex architecture.

Performance: For registered users, the iframe loads the full QuickSight interface efficiently. The SDK’s additional abstractions didn’t provide measurable performance benefits, and with Direct Query mode already impacting performance, we didn’t want additional overhead.

Maintenance: Fewer dependencies meant less maintenance overhead. The iframe approach required no version management or SDK updates, which was particularly appealing given our resource constraints.
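The frontend side therefore stays minimal: fetch the embed URL from our API, then point an iframe at it. As a sketch, here is the kind of server-side template our backend could use to render that iframe (the markup and sizing are illustrative, not a prescribed QuickSight snippet):

```python
# Illustrative server-side rendering of the embed iframe. The embed URL
# comes from the Lambda endpoint described above.

def render_dashboard_iframe(embed_url):
    """Return the minimal HTML snippet needed to embed a dashboard."""
    return (
        f'<iframe src="{embed_url}" '
        'width="100%" height="720" frameborder="0" '
        'allowfullscreen></iframe>'
    )

html = render_dashboard_iframe(
    "https://quicksight.aws.amazon.com/embed/abc"
    "#p.SellerEmail=seller%40example.com"
)
print(html)
```

Because the embed URL is short-lived, the iframe `src` must be fetched fresh on each page load rather than cached in the frontend.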

Security Considerations in Single-User Architecture

The serverless architecture provided several security benefits while working within our cost-conscious single-user model:

Token Management: Lambda functions handled QuickSight API credentials securely without exposing them to the frontend.

Domain Validation: The AllowedDomains parameter in the Lambda function ensured dashboards could only be embedded in our authorized application.

User Context: API Gateway authentication ensured that embed URLs were generated only for authenticated users, with proper user context passed for parameter-based filtering.

Time-Limited URLs: QuickSight embed URLs expire after 5 minutes, limiting the impact of potential URL exposure.

Parameter-Based Security: User-specific parameters appended to URLs ensured that even with shared QuickSight sessions, users only saw their authorized data.

Cost Optimization Achieved

The single-user registered approach provided significant cost benefits compared to our initial RLS implementation:

Predictable Costs:

  • Single QuickSight user account: $24/month (Author) vs. $24+ per user previously
  • No per-user charges for multiple QuickSight accounts
  • Lambda and API Gateway costs were minimal for our usage patterns

Resource Utilization:

  • Better utilization of QuickSight resources with shared sessions
  • No wasted capacity from inactive user accounts
  • Simplified billing and cost tracking

Operational Savings:

  • No QuickSight user lifecycle management
  • Reduced complexity in access control
  • Simplified audit and compliance requirements

The Lambda-based URL generation added minimal cost overhead while providing architectural benefits that justified the small additional expense.

Best Practices Learned Through Experience

Performance Optimization with Direct Query

  • Query Optimization: Essential for maintaining sanity when every dashboard interaction hits Athena
  • Data Partitioning: Became critical for reasonable query performance with large datasets
  • User Expectation Management: Training users to expect longer load times for real-time data
  • Caching Strategies: Implementing application-level caching where possible without compromising data freshness
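As one example of the application-level caching mentioned above, a small time-based cache can hold expensive query results for a short window, trading a bounded amount of staleness for fewer round-trips to Athena. This is a minimal sketch with illustrative names, not the caching layer we shipped:

```python
import time

class TTLCache:
    """Tiny time-based cache for application-level result caching."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]  # expired: drop and report a miss
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic())

# A short TTL keeps "near real-time" honest while absorbing bursts
# of identical dashboard queries.
cache = TTLCache(ttl_seconds=30)
cache.set("orders:summary", {"total": 1200})
print(cache.get("orders:summary"))  # {'total': 1200}
```

The TTL has to be chosen against the client's freshness requirement; for our "near real-time" mandate, anything beyond a few tens of seconds would have defeated the point of Direct Query.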

Cost Considerations

  • Single-User Architecture: The pragmatic solution that balanced functionality with budget constraints
  • Query Monitoring: Essential for managing Athena costs with frequent direct queries
  • Resource Right-Sizing: Optimizing Lambda memory and timeout settings for cost efficiency
  • Usage Analytics: Tracking dashboard usage to justify ongoing costs

Security Implementation

  • Parameter-Based Filtering: The cost-effective alternative to row-level security that actually worked
  • Application-Level Security: Moving security enforcement to our application layer where we had full control
  • Audit Logging: Maintaining security audit trails at the application level
  • Session Management: Careful handling of shared QuickSight sessions

Dashboard Design for Direct Query

  • Loading Indicators: Essential for user experience with slower query times
  • Query Complexity: Keeping visualizations simple to maintain acceptable performance
  • Error Handling: Robust error handling for query timeouts and failures
  • Progressive Loading: Designing dashboards that load incrementally when possible

Troubleshooting: A Compilation of Learning Experiences

Direct Query Performance Issues

  • Monitor Athena query performance and optimize SQL when possible
  • Implement query result caching at the Athena level where appropriate
  • Consider data partitioning strategies to improve query speed
  • Set realistic user expectations about loading times for real-time data

Cost Management Challenges

  • Track QuickSight, Lambda, API Gateway, and Athena costs separately
  • Monitor parameter-based filtering effectiveness to ensure security without cost impact
  • Implement usage analytics to justify ongoing investment
  • Regular cost reviews to catch unexpected increases

Security Validation

  • Regular testing of parameter-based filtering to ensure data isolation
  • Audit logging of all dashboard access for compliance requirements
  • Monitoring for parameter manipulation attempts
  • Validation of user context passing through the entire pipeline

Epilogue: Architectural Lessons Learned

This guide represents the distilled wisdom from what began as a “simple MongoDB-to-QuickSight connection” and evolved into a comprehensive exploration of AWS data visualization architecture under real-world constraints. The journey included multiple architectural pivots from MongoDB through S3 to Athena and finally QuickSight, extensive security research where row-level security proved too expensive for our budget, cost-conscious embedding implementation using single-user architecture, performance acceptance with Direct Query mode for real-time data requirements, and the realization that sometimes pragmatic solutions trump theoretical perfection.

The result is a cost-effective, maintainable, and secure data visualization solution that handles enterprise-grade requirements while respecting budget constraints. Sometimes the best architectural decisions are driven by economic realities rather than technical idealism, and that’s perfectly acceptable in production environments.

MongoDB still isn’t directly supported in QuickSight as of this writing, but our workaround became a more flexible, more cost-effective, and more maintainable solution than a direct connection would have been. The parameter-based filtering approach, while requiring more application-level implementation, provided better cost control and security transparency than QuickSight’s native RLS.

The iframe-based embedding approach proved robust in production, handling thousands of daily dashboard loads with minimal maintenance requirements while keeping costs predictable. Sometimes the straightforward solution really is the best solution, especially when budgets are involved.

Direct Query mode, despite its performance challenges, delivered the real-time data requirements our client needed. Users learned to accept longer loading times as the price of current information, and the business value of real-time insights justified the performance trade-offs.

This comprehensive guide provides the foundation for creating powerful, interactive dashboards in Amazon QuickSight using Amazon Athena as your data source, enabling real-time analytics and business intelligence capabilities for your organization while maintaining cost consciousness and architectural pragmatism.
