AWS Amplify Gen 2 doesn’t include the @search directive that many developers relied on in earlier versions, leaving teams scrambling for alternative search solutions. This guide walks you through building a custom AWS serverless search architecture using DynamoDB Streams, Lambda functions, and OpenSearch integration to replicate advanced search functionality.
This tutorial is designed for AWS developers and full-stack engineers who need to implement sophisticated search capabilities in their Amplify Gen 2 applications without the built-in @search directive.
We’ll cover how to set up DynamoDB streams for real-time data capture, create Lambda functions that automatically sync your database changes to OpenSearch, and implement a robust search indexing system that keeps your search results current. You’ll also learn to optimize performance and handle common synchronization challenges in your AWS serverless search solution.
Understanding AWS Amplify Gen 2 Search Limitations and Required Architecture
Identify missing @search directive functionality in Gen 2
AWS Amplify Gen 2 search represents a significant departure from its predecessor, introducing several limitations that catch developers off guard. The beloved @search directive, which previously enabled automatic indexing and search functionality through Amazon OpenSearch Service, has been removed from the core framework. This change means that GraphQL schemas can no longer leverage simple annotations to create powerful search experiences.
The missing functionality includes automatic field indexing, real-time search synchronization, and built-in query operators for complex searches. Developers accustomed to effortlessly adding searchable fields to their data models now face the challenge of implementing custom search solutions from scratch. The streamlined approach that made Amplify Gen 1 so appealing for rapid development has given way to a more modular but complex architecture.
Compare Gen 1 search capabilities with Gen 2 constraints
| Feature | AWS Amplify Gen 1 | AWS Amplify Gen 2 |
|---|---|---|
| @search directive | Fully supported with automatic OpenSearch integration | Not available |
| Automatic indexing | Built-in field indexing and mapping | Manual implementation required |
| Real-time sync | Automatic DynamoDB to OpenSearch synchronization | Custom DynamoDB Streams setup needed |
| Query complexity | Advanced filtering and search operators | Basic DynamoDB queries only |
| Setup complexity | Single annotation implementation | Multi-service architecture required |
| Search latency | Near real-time search results | Depends on custom implementation |
The contrast becomes stark when examining practical implementation. Gen 1 developers could add `@searchable` to a GraphQL model and immediately gain access to sophisticated search capabilities. Gen 2 requires orchestrating multiple AWS services, including DynamoDB Streams, Lambda functions, and OpenSearch, to achieve similar functionality.
Design alternative search architecture using DynamoDB Streams
Building an effective AWS Amplify Gen 2 search alternative demands a carefully orchestrated serverless search architecture. The foundation begins with DynamoDB as the primary data store, leveraging its stream capabilities to capture real-time data changes. Every create, update, and delete operation triggers stream events that feed into downstream processing systems.
The architecture centers around a three-tier approach:
- Data Layer: DynamoDB tables with streams enabled capture all data modifications
- Processing Layer: Lambda functions consume stream events and transform data for search indexing
- Search Layer: OpenSearch Service provides the search engine capabilities with custom indexes
This DynamoDB streams real-time processing approach ensures data consistency while maintaining the performance characteristics developers expect. The serverless nature keeps costs predictable and scales automatically with usage patterns.
Map OpenSearch integration requirements
OpenSearch AWS integration within this custom architecture requires careful planning across several dimensions. Index design must accommodate the specific search patterns your application demands, while data transformation logic ensures DynamoDB records map correctly to OpenSearch documents.
Key integration requirements include:
- Index Schema Design: Define OpenSearch mappings that support your search use cases
- Data Transformation: Convert DynamoDB attribute formats to OpenSearch-compatible documents
- Batch Processing: Handle bulk operations efficiently to minimize OpenSearch API calls
- Error Handling: Implement retry logic and dead letter queues for failed indexing operations
- Security Configuration: Establish proper IAM roles and policies for cross-service communication
The Lambda OpenSearch synchronization component becomes critical for maintaining data integrity. Your functions must handle various DynamoDB stream event types while ensuring idempotent operations that won’t corrupt search indexes during retry scenarios.
Performance optimization requires understanding OpenSearch cluster sizing, shard distribution, and refresh intervals. Unlike the automatic scaling provided by Amplify Gen 1’s integrated approach, this AWS serverless search solution demands active monitoring and tuning to deliver optimal search experiences.
Setting Up DynamoDB Streams for Real-Time Data Capture
Enable DynamoDB Streams on your data tables
Getting DynamoDB streams up and running starts with enabling them on your existing tables. If you’re working with AWS Amplify Gen 2, you’ll need to modify your data schema to include stream configuration. The process varies slightly depending on whether you’re adding streams to new tables or updating existing ones.
For new tables, start by defining the model in your Amplify data schema (`amplify/data/resource.ts`):

```typescript
import { a, defineData } from '@aws-amplify/backend';

const schema = a.schema({
  Todo: a.model({
    content: a.string(),
    isDone: a.boolean(),
  }).authorization(allow => [allow.owner()]),
});

export const data = defineData({ schema });
```
Then configure the stream in your `amplify/backend.ts`:
```typescript
import { defineBackend } from '@aws-amplify/backend';
import { StreamViewType } from 'aws-cdk-lib/aws-dynamodb';
import { data } from './data/resource';

const backend = defineBackend({
  data,
});

// Enable DynamoDB Streams on the Amplify-generated table
backend.data.resources.cfnResources.amplifyDynamoDbTables['Todo'].streamSpecification = {
  streamViewType: StreamViewType.NEW_AND_OLD_IMAGES,
};
```
For existing tables, you’ll need to update the stream settings through the AWS Console or AWS CLI. The CLI approach gives you more control:
```bash
aws dynamodb update-table \
  --table-name YourTableName \
  --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES
```
Configure stream view types for optimal data tracking
The stream view type you choose determines what data gets captured when items change in your DynamoDB table. This choice directly impacts your Lambda function’s ability to process changes and update OpenSearch indexes effectively.
Here’s what each view type captures:
| View Type | What’s Captured | Best Use Case |
|---|---|---|
| `KEYS_ONLY` | Primary key attributes only | Simple change notifications |
| `NEW_IMAGE` | Entire item after modification | Create and update operations |
| `OLD_IMAGE` | Entire item before modification | Delete and audit operations |
| `NEW_AND_OLD_IMAGES` | Full before and after images | Complete synchronization |
For OpenSearch integration, `NEW_AND_OLD_IMAGES` works best because it gives your Lambda function complete context about what changed. Your function can:
- Compare old and new values to determine what fields changed
- Handle deletions by accessing the old image when the new image is empty
- Implement complex business logic based on field-level changes
- Maintain data consistency between DynamoDB and OpenSearch
When you need to optimize costs and reduce stream record size, consider `NEW_IMAGE` if you’re primarily handling creates and updates. However, you’ll lose the ability to easily detect deletions or implement sophisticated change detection logic.
Verify stream activation and data flow monitoring
After enabling streams, verification becomes critical for ensuring your AWS Amplify custom search implementation works reliably. Start by checking the stream status through the AWS Console or programmatically.
Use the AWS CLI to verify stream activation:
```bash
aws dynamodb describe-table --table-name YourTableName --query 'Table.StreamSpecification'
```
The response should show:
```json
{
  "StreamEnabled": true,
  "StreamViewType": "NEW_AND_OLD_IMAGES"
}
```
Monitor stream health using CloudWatch metrics. Key metrics to track include:
- `IncomingRecords`: Number of stream records created
- `IteratorAge` (reported by the consuming Lambda function): How far behind stream processing is running
- `ReadProvisionedThroughputExceeded`: Stream read throttling events
Set up CloudWatch alarms for these metrics to catch issues early:
```bash
aws cloudwatch put-metric-alarm \
  --alarm-name "DynamoDB-Stream-Iterator-Age" \
  --alarm-description "Monitor stream processing lag" \
  --metric-name IteratorAge \
  --namespace AWS/Lambda \
  --dimensions Name=FunctionName,Value=YourStreamProcessorFunction \
  --statistic Maximum \
  --period 300 \
  --threshold 30000 \
  --comparison-operator GreaterThanThreshold
```
Test the data flow by making changes to your DynamoDB table and monitoring the stream records. You can use the AWS Console’s stream viewer or write a simple test script to consume stream records and verify they contain expected data for your DynamoDB streams real-time processing pipeline.
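If you prefer a script over the console, here is a minimal sketch that reads recent stream records with the AWS SDK v3 DynamoDB Streams client. The `STREAM_ARN` environment variable is a placeholder; the stream ARN itself comes from `describe-table`.

```typescript
import {
  DynamoDBStreamsClient,
  DescribeStreamCommand,
  GetShardIteratorCommand,
  GetRecordsCommand,
} from "@aws-sdk/client-dynamodb-streams";

// Placeholder: set STREAM_ARN to the LatestStreamArn from describe-table.
const streamArn = process.env.STREAM_ARN!;
const client = new DynamoDBStreamsClient({ region: process.env.AWS_REGION });

async function dumpRecentRecords() {
  // List the stream's shards.
  const { StreamDescription } = await client.send(
    new DescribeStreamCommand({ StreamArn: streamArn })
  );

  for (const shard of StreamDescription?.Shards ?? []) {
    // Read from the oldest record still retained in the shard.
    const { ShardIterator } = await client.send(
      new GetShardIteratorCommand({
        StreamArn: streamArn,
        ShardId: shard.ShardId!,
        ShardIteratorType: "TRIM_HORIZON",
      })
    );
    if (!ShardIterator) continue;

    const { Records } = await client.send(new GetRecordsCommand({ ShardIterator }));
    for (const record of Records ?? []) {
      console.log(record.eventName, JSON.stringify(record.dynamodb, null, 2));
    }
  }
}

dumpRecentRecords().catch(console.error);
```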
Creating Lambda Functions to Process DynamoDB Stream Events
Build Lambda function to capture stream records
Creating a robust Lambda function for DynamoDB streams real-time processing starts with setting up the basic handler structure. Your function needs to process batches of stream records efficiently while maintaining data integrity throughout the synchronization process.
```python
from aws_lambda_powertools import Logger

logger = Logger()

def lambda_handler(event, context):
    try:
        for record in event['Records']:
            event_name = record['eventName']

            # Route each stream record to the appropriate handler
            # (the indexing helpers are covered in the following sections)
            if event_name in ['INSERT', 'MODIFY']:
                process_upsert_record(record)
            elif event_name == 'REMOVE':
                process_delete_record(record)
    except Exception as e:
        logger.error(f"Error processing stream records: {str(e)}")
        raise
```
The stream record structure contains crucial metadata including the event type, sequence number, and both old and new images of your DynamoDB items. Your Lambda function should extract this information systematically to determine the appropriate action for OpenSearch synchronization.
Configure your Lambda function with appropriate memory allocation (typically 512MB-1GB) and timeout settings (5-15 minutes depending on batch size). Enable dead letter queues to capture failed processing attempts and set up CloudWatch alarms for monitoring function performance.
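If you wire the stream to the function with the CDK (for example from `amplify/backend.ts`), the event source mapping is where batch size, retries, and the dead letter queue live. The sketch below assumes `todoTable` (a `dynamodb.ITable` with streams enabled) and `streamProcessor` (a `lambda.IFunction`) already exist in your backend; the construct names are illustrative.

```typescript
import { Duration, Stack } from "aws-cdk-lib";
import { StartingPosition } from "aws-cdk-lib/aws-lambda";
import { DynamoEventSource, SqsDlq } from "aws-cdk-lib/aws-lambda-event-sources";
import { Queue } from "aws-cdk-lib/aws-sqs";

// `todoTable` and `streamProcessor` are assumed to come from your backend definition.
const deadLetterQueue = new Queue(Stack.of(todoTable), "StreamSyncDlq", {
  retentionPeriod: Duration.days(14),
});

streamProcessor.addEventSource(
  new DynamoEventSource(todoTable, {
    startingPosition: StartingPosition.LATEST, // only process new changes
    batchSize: 100,                            // tune against document size
    bisectBatchOnFunctionError: true,          // isolate poison records on errors
    retryAttempts: 3,
    onFailure: new SqsDlq(deadLetterQueue),    // failed batches land here for review
  })
);
```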
Transform DynamoDB data format for OpenSearch compatibility
DynamoDB’s native format includes type descriptors that OpenSearch doesn’t understand directly. You’ll need to flatten these structures and convert them into standard JSON objects that work seamlessly with your OpenSearch index schema.
```python
def transform_dynamodb_item(dynamodb_item):
    """Convert DynamoDB item format to OpenSearch-compatible JSON"""
    transformed = {}

    for key, value in dynamodb_item.items():
        if 'S' in value:  # String
            transformed[key] = value['S']
        elif 'N' in value:  # Number
            transformed[key] = float(value['N'])
        elif 'BOOL' in value:  # Boolean
            transformed[key] = value['BOOL']
        elif 'NULL' in value:  # Null
            transformed[key] = None
        elif 'L' in value:  # List
            transformed[key] = [transform_attribute_value(item) for item in value['L']]
        elif 'M' in value:  # Map
            transformed[key] = transform_dynamodb_item(value['M'])
        elif 'SS' in value:  # String Set
            transformed[key] = value['SS']
        elif 'NS' in value:  # Number Set
            transformed[key] = [float(n) for n in value['NS']]

    return transformed


def transform_attribute_value(value):
    """Convert a single DynamoDB attribute value (e.g. a list element)."""
    return transform_dynamodb_item({'value': value}).get('value')
```
Pay special attention to handling nested objects and arrays, as these require recursive processing. Your transformation logic should also handle edge cases like empty strings, null values, and mixed data types within sets. Consider implementing field mapping if your OpenSearch schema uses different field names than your DynamoDB table.
Implement error handling and retry mechanisms
Robust error handling prevents data loss and ensures your AWS Amplify custom search implementation remains reliable even when facing temporary service disruptions or unexpected data formats.
```python
import time
import random
from botocore.exceptions import ClientError

def retry_with_exponential_backoff(func, max_retries=3, base_delay=1):
    """Implement exponential backoff for OpenSearch operations"""
    for attempt in range(max_retries):
        try:
            return func()
        except ClientError as e:
            error_code = e.response['Error']['Code']

            if error_code in ['ThrottlingException', 'ServiceUnavailable']:
                if attempt < max_retries - 1:
                    delay = base_delay * (2 ** attempt) + random.uniform(0, 1)
                    logger.warning(f"Retrying after {delay:.2f}s, attempt {attempt + 1}")
                    time.sleep(delay)
                else:
                    logger.error(f"Max retries exceeded: {str(e)}")
                    raise
            else:
                logger.error(f"Non-retryable error: {str(e)}")
                raise
```
Implement different retry strategies based on error types. Throttling errors should trigger exponential backoff, while authentication or permission errors should fail immediately. Store failed records in a separate DynamoDB table or SQS dead letter queue for manual review and reprocessing.
Create detailed logging that captures the original DynamoDB record, transformation results, and error details. This information proves invaluable when troubleshooting synchronization issues in your Lambda OpenSearch synchronization workflow.
Configure appropriate IAM permissions for Lambda execution
Your Lambda function requires carefully scoped IAM permissions to access DynamoDB streams, write to OpenSearch, and perform logging operations. Avoid overly broad permissions that could create security vulnerabilities in your AWS serverless search solution.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:DescribeStream",
        "dynamodb:GetRecords",
        "dynamodb:GetShardIterator",
        "dynamodb:ListStreams"
      ],
      "Resource": "arn:aws:dynamodb:region:account:table/YourTable/stream/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "es:ESHttpPost",
        "es:ESHttpPut",
        "es:ESHttpDelete"
      ],
      "Resource": "arn:aws:es:region:account:domain/your-opensearch-domain/*"
    }
  ]
}
```
Grant specific permissions for CloudWatch logging, including `logs:CreateLogGroup`, `logs:CreateLogStream`, and `logs:PutLogEvents`. If using AWS X-Ray for tracing, add appropriate X-Ray permissions to your policy.
Consider using resource-based policies on your OpenSearch domain to allow Lambda function access. This approach often provides better security isolation than adding broad OpenSearch permissions to your Lambda execution role. Test your permissions thoroughly in a development environment before deploying to production, ensuring your function can handle all required operations without unnecessary privileges.
Integrating OpenSearch Service for Advanced Search Capabilities
Set up OpenSearch domain with proper security configurations
Creating an OpenSearch domain requires careful attention to security and network configurations. Start by choosing between a public or VPC-based deployment. VPC deployments offer better security isolation, which works perfectly for our AWS Amplify custom search implementation.
When configuring your domain, select instance types based on your expected data volume and query load. t3.small instances work well for development, while m6g.large or larger instances handle production workloads better. Enable encryption at rest and in transit to protect sensitive data.
Access policies need special consideration for Lambda OpenSearch synchronization. Create an IAM role that allows your Lambda functions to perform indexing operations while restricting unauthorized access. The policy should include permissions for `es:ESHttpPost`, `es:ESHttpPut`, and `es:ESHttpDelete` actions on your domain.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT:role/lambda-opensearch-role"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:region:ACCOUNT:domain/your-domain/*"
    }
  ]
}
```
Create search indices matching your data structure
Index creation should mirror your DynamoDB table structure while optimizing for search patterns. Each index represents a searchable collection that corresponds to your data model.
Define your index mappings before inserting data. This prevents OpenSearch from auto-generating suboptimal field types. For example, if your DynamoDB table contains user profiles, create an index with appropriate field mappings:
```json
{
  "mappings": {
    "properties": {
      "userId": { "type": "keyword" },
      "username": {
        "type": "text",
        "fields": {
          "keyword": { "type": "keyword" }
        }
      },
      "email": { "type": "keyword" },
      "createdAt": { "type": "date" },
      "tags": { "type": "keyword" }
    }
  }
}
```
Consider creating separate indices for different entity types rather than mixing everything into one index. This approach improves query performance and makes data management easier.
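If you want to create indices from code rather than the console, a sketch using the OpenSearch JavaScript client might look like the following. The `users` index name is illustrative, and the unauthenticated client is only for brevity; in Lambda, use the SigV4-signed client shown in the next section.

```typescript
import { Client } from "@opensearch-project/opensearch";

// Unauthenticated client for brevity; swap in the signed client for real deployments.
const client = new Client({ node: process.env.OPENSEARCH_ENDPOINT });

async function ensureUserIndex() {
  const index = "users"; // illustrative index name

  const exists = await client.indices.exists({ index });
  if (exists.body) return; // already created

  await client.indices.create({
    index,
    body: {
      mappings: {
        properties: {
          userId: { type: "keyword" },
          username: { type: "text", fields: { keyword: { type: "keyword" } } },
          email: { type: "keyword" },
          createdAt: { type: "date" },
          tags: { type: "keyword" },
        },
      },
    },
  });
}
```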
Establish secure connection between Lambda and OpenSearch
Your Lambda functions need secure, reliable connections to OpenSearch. Using AWS SDK v3 with the OpenSearch JavaScript client provides the most robust integration.
Install the required dependencies in your Lambda deployment package:
```bash
npm install @opensearch-project/opensearch @aws-sdk/credential-provider-node
```
Configure the client with AWS authentication:
```javascript
const { Client } = require('@opensearch-project/opensearch');
const { AwsSigv4Signer } = require('@opensearch-project/opensearch/aws');
const { defaultProvider } = require('@aws-sdk/credential-provider-node');

const client = new Client({
  ...AwsSigv4Signer({
    region: process.env.AWS_REGION,
    service: 'es', // signing name for OpenSearch Service domains
    // Resolve credentials from the Lambda execution role
    getCredentials: () => defaultProvider()(),
  }),
  node: process.env.OPENSEARCH_ENDPOINT,
});
```
Set up proper error handling and retry logic for network issues. OpenSearch connections can occasionally fail, so implement exponential backoff strategies.
Configure index mapping for optimal search performance
Index mappings determine how OpenSearch stores and searches your data. Poor mapping choices create performance bottlenecks that become expensive to fix later.
Use `keyword` types for exact matches and filtering operations. Apply `text` types for full-text search capabilities. Date fields need proper formatting to enable time-based queries and aggregations.
Create multi-field mappings when you need both exact matching and full-text search on the same field:
| Field Type | Use Case | Example |
|---|---|---|
| `text` | Full-text search | Product descriptions, comments |
| `keyword` | Exact matching, aggregations | User IDs, status codes |
| `date` | Time-based queries | Creation timestamps, expiry dates |
| `nested` | Complex objects | Address information, metadata |
Configure index settings for your specific workload. Increase `number_of_shards` for larger datasets, but avoid over-sharding smaller indices. Set `number_of_replicas` based on your availability requirements.
```json
{
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1,
    "analysis": {
      "analyzer": {
        "custom_analyzer": {
          "tokenizer": "standard",
          "filter": ["lowercase", "asciifolding"]
        }
      }
    }
  }
}
```
Custom analyzers improve search relevance for specific use cases. The example above handles accented characters and case-insensitive matching, which works well for user-generated content in your AWS serverless search solution.
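You can sanity-check an analyzer before indexing real data with the analyze API. A quick sketch, assuming the `client` configured earlier and an illustrative index created with the settings above:

```typescript
// `client` is the signed OpenSearch client configured earlier; "products" is illustrative.
async function previewAnalyzer() {
  const response = await client.indices.analyze({
    index: "products",
    body: { analyzer: "custom_analyzer", text: "Crème Brûlée RECIPES" },
  });

  // Expect lowercased, accent-folded tokens: ["creme", "brulee", "recipes"]
  console.log(response.body.tokens.map((t: { token: string }) => t.token));
}
```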
Implementing Data Synchronization and Search Indexing
Process INSERT operations to add new documents to OpenSearch
When new items are added to your DynamoDB table, the stream event captures the complete data structure in the `dynamodb.NewImage` field. Your Lambda function needs to transform this DynamoDB format into a clean JSON document that OpenSearch can index effectively.
Start by extracting the relevant fields from the DynamoDB item and converting them from DynamoDB’s native format. The AWS SDK provides utilities to simplify this conversion process. Create a mapping function that transforms DynamoDB attribute values into standard JavaScript objects, removing the type descriptors that DynamoDB includes.
```javascript
const transformDynamoDBItem = (item) => {
  const transformed = {};
  Object.keys(item).forEach(key => {
    const value = item[key];
    if (value.S !== undefined) transformed[key] = value.S;
    else if (value.N !== undefined) transformed[key] = parseFloat(value.N);
    else if (value.BOOL !== undefined) transformed[key] = value.BOOL;
    // Add other type conversions (lists, maps, sets) as needed
  });
  return transformed;
};
```
After transforming the data, use the OpenSearch client to index the document with a PUT request to your OpenSearch domain. Include the DynamoDB item’s primary key as the document ID to maintain consistency between both systems.
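A sketch of that indexing call, reusing `transformDynamoDBItem` and the `client` configured earlier; the `todos` index name and the `id` key attribute are illustrative:

```typescript
// `client` and `transformDynamoDBItem` are defined earlier in this guide.
async function indexStreamRecord(record: any) {
  const document = transformDynamoDBItem(record.dynamodb.NewImage);
  const documentId = record.dynamodb.Keys.id.S; // DynamoDB primary key as document ID

  await client.index({
    index: "todos", // illustrative index name
    id: documentId,
    body: document,
  });
}
```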
Handle UPDATE events to maintain data consistency
UPDATE operations in DynamoDB streams provide both the old and new versions of the modified item through the `OldImage` and `NewImage` fields. This gives you complete visibility into what changed, allowing you to make intelligent decisions about how to update the corresponding OpenSearch document.
Compare the old and new images to identify which fields actually changed. This prevents unnecessary updates to OpenSearch and reduces processing overhead. Some fields might be more critical for search functionality than others, so you can prioritize updates based on field importance.
```javascript
const getChangedFields = (oldImage, newImage) => {
  const changes = {};
  const allKeys = new Set([...Object.keys(oldImage || {}), ...Object.keys(newImage || {})]);

  allKeys.forEach(key => {
    const oldValue = oldImage?.[key];
    const newValue = newImage?.[key];

    if (JSON.stringify(oldValue) !== JSON.stringify(newValue)) {
      changes[key] = newValue;
    }
  });

  return changes;
};
```
Use partial updates in OpenSearch when only specific fields have changed. This approach is more efficient than replacing the entire document and maintains better performance as your dataset grows.
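A sketch of such a partial update, reusing `getChangedFields`, `transformDynamoDBItem`, and the `client` from earlier; the index name and key attribute are illustrative:

```typescript
async function applyPartialUpdate(record: any) {
  const oldImage = transformDynamoDBItem(record.dynamodb.OldImage ?? {});
  const newImage = transformDynamoDBItem(record.dynamodb.NewImage ?? {});
  const changes = getChangedFields(oldImage, newImage);

  if (Object.keys(changes).length === 0) return; // nothing search-relevant changed

  await client.update({
    index: "todos",                // illustrative index name
    id: record.dynamodb.Keys.id.S, // same document ID used at index time
    body: { doc: changes },        // partial document update
  });
}
```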
Manage DELETE operations to remove outdated search entries
DELETE events in DynamoDB streams only contain the `OldImage` since the item no longer exists in the table. Extract the primary key from the deleted item and use it to remove the corresponding document from OpenSearch.
Implement proper error handling for delete operations because the document might not exist in OpenSearch for various reasons – perhaps it failed to sync initially, or was already deleted in a previous operation. Your Lambda function should handle these scenarios gracefully without failing the entire batch.
```javascript
const handleDelete = async (record, opensearchClient) => {
  const deletedItem = record.dynamodb.OldImage;
  const documentId = extractPrimaryKey(deletedItem);

  try {
    await opensearchClient.delete({
      index: 'your-index-name',
      id: documentId
    });
  } catch (error) {
    if (error.statusCode === 404) {
      console.log(`Document ${documentId} not found in OpenSearch`);
    } else {
      throw error;
    }
  }
};
```

Consider implementing soft deletes if your application requires maintaining search history or audit trails. Instead of removing documents entirely, you can add a `deleted` flag and filter these items in your search queries.
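If you go the soft-delete route, a sketch of both sides using the `client` from earlier might look like this (index and field names are illustrative):

```typescript
// Soft delete: flag the document instead of removing it...
async function softDeleteDocument(documentId: string) {
  await client.update({
    index: "todos", // illustrative index name
    id: documentId,
    body: { doc: { deleted: true, deletedAt: new Date().toISOString() } },
  });
}

// ...and exclude flagged documents at query time.
async function searchActiveTodos(term: string) {
  return client.search({
    index: "todos",
    body: {
      query: {
        bool: {
          must: [{ match: { content: term } }],
          must_not: [{ term: { deleted: true } }],
        },
      },
    },
  });
}
```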
Implement bulk operations for improved performance
DynamoDB streams process records in batches, and OpenSearch supports bulk operations that can significantly improve performance when processing multiple changes simultaneously. Group your operations by type (index, update, delete) and send them as bulk requests to OpenSearch.
The bulk API in OpenSearch accepts an array of operations in a specific format. Each operation consists of an action header followed by the document body (except for delete operations which only need the header).
```javascript
const processBulkOperations = async (records, opensearchClient) => {
  const bulkBody = [];

  records.forEach(record => {
    const eventName = record.eventName;
    const documentId = extractPrimaryKey(record.dynamodb);

    if (eventName === 'INSERT' || eventName === 'MODIFY') {
      bulkBody.push({
        index: {
          _index: 'your-index-name',
          _id: documentId
        }
      });
      bulkBody.push(transformDynamoDBItem(record.dynamodb.NewImage));
    } else if (eventName === 'REMOVE') {
      bulkBody.push({
        delete: {
          _index: 'your-index-name',
          _id: documentId
        }
      });
    }
  });

  if (bulkBody.length > 0) {
    const response = await opensearchClient.bulk({ body: bulkBody });
    handleBulkResponse(response);
  }
};
```
Monitor the bulk response for any failed operations and implement retry logic with exponential backoff for transient failures. This ensures your DynamoDB to OpenSearch data sync remains reliable even when OpenSearch experiences temporary issues or capacity constraints.
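The `handleBulkResponse` helper referenced above is left open; a minimal sketch inspects the per-item results and returns the failures so the caller can retry them or push them to a dead letter queue:

```typescript
function handleBulkResponse(response: any) {
  // The bulk API returns per-item results; `errors: true` means at least one failed.
  if (!response.body.errors) return [];

  const failed = response.body.items.filter((item: any) => {
    const action = item.index ?? item.delete ?? item.update;
    return action && action.status >= 300;
  });

  failed.forEach((item: any) => {
    const action = item.index ?? item.delete ?? item.update;
    console.error("Bulk operation failed", action._id, JSON.stringify(action.error));
  });

  return failed;
}
```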
Set appropriate batch sizes based on your document sizes and OpenSearch cluster capacity. Start with smaller batches (50-100 documents) and scale up based on performance testing results.
Testing and Optimizing Your Custom Search Implementation
Validate Real-Time Synchronization Between DynamoDB and OpenSearch
Testing your DynamoDB streams Lambda synchronization requires a systematic approach to catch potential data inconsistencies. Start by creating test records in your DynamoDB table and monitor how quickly they appear in your OpenSearch index. Use AWS CloudWatch to track the Lambda execution time and verify that stream records are processed within acceptable latency windows.
Create a validation script that compares data between DynamoDB and OpenSearch at regular intervals. This script should check for missing documents, outdated records, and field mapping inconsistencies. Set up automated tests that insert, update, and delete records while monitoring the sync process in real-time.
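One lightweight way to do this, sketched below with placeholder table and index names and a simple string `id` key attribute, is to scan a sample of items and confirm each has a matching OpenSearch document:

```typescript
import { DynamoDBClient, ScanCommand } from "@aws-sdk/client-dynamodb";

// `client` is the signed OpenSearch client from earlier; names below are placeholders.
const dynamo = new DynamoDBClient({ region: process.env.AWS_REGION });

async function validateSync(tableName: string, index: string) {
  const { Items = [] } = await dynamo.send(
    new ScanCommand({ TableName: tableName, Limit: 100 })
  );

  const missing: string[] = [];
  for (const item of Items) {
    const id = item.id.S!; // assumes a string primary key named "id"
    const exists = await client.exists({ index, id });
    if (!exists.body) missing.push(id);
  }

  console.log(`${missing.length} of ${Items.length} sampled items missing from OpenSearch`, missing);
}
```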
Pay special attention to edge cases like rapid consecutive updates to the same record, bulk operations, and network timeout scenarios. Your Lambda function should handle DynamoDB stream record deduplication properly, especially when dealing with multiple record versions in a single batch.
Test Search Functionality Across Different Query Types
Your AWS Amplify custom search implementation needs thorough testing across various query patterns. Create comprehensive test suites that cover exact matches, partial text searches, range queries, and complex boolean operations. Test each search type your application requires, from simple keyword searches to advanced filtering combinations.
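As a reference point, these query shapes cover the main types against a hypothetical `products` index (field names are illustrative, and `client` is the signed OpenSearch client from earlier):

```typescript
async function runSampleQueries() {
  // Exact match on a keyword field
  await client.search({
    index: "products",
    body: { query: { term: { status: "active" } } },
  });

  // Full-text match with fuzziness to tolerate misspellings
  await client.search({
    index: "products",
    body: { query: { match: { description: { query: "wireles headphone", fuzziness: "AUTO" } } } },
  });

  // Range query on a date field
  await client.search({
    index: "products",
    body: { query: { range: { createdAt: { gte: "now-30d/d" } } } },
  });

  // Boolean combination of full-text search and filters
  await client.search({
    index: "products",
    body: {
      query: {
        bool: {
          must: [{ match: { description: "headphones" } }],
          filter: [{ term: { category: "audio" } }, { range: { price: { lte: 200 } } }],
        },
      },
    },
  });
}
```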
Build test datasets that include edge cases like special characters, Unicode text, empty fields, and extremely long strings. Verify that your OpenSearch mapping handles these scenarios correctly and returns expected results. Create automated tests for common user search patterns, including misspellings and partial matches.
Test performance under different data volumes by gradually increasing your test dataset size. Monitor query response times and identify potential bottlenecks in your search architecture. Document which query types perform best and establish baseline metrics for comparison.
Monitor Performance Metrics and Optimize Lambda Execution
AWS Lambda performance monitoring is crucial for maintaining efficient DynamoDB streams real-time processing. Track key metrics including execution duration, memory utilization, error rates, and cold start frequency. Use CloudWatch dashboards to visualize these metrics and set up alerts for performance degradation.
Optimize your Lambda function by adjusting memory allocation based on actual usage patterns. Higher memory often reduces execution time, but balance this against cost considerations. Implement connection pooling for OpenSearch clients to reduce initialization overhead across function invocations.
Consider implementing batch processing for multiple stream records to improve throughput. However, balance batch size against latency requirements – larger batches improve efficiency but may increase processing delays. Monitor DynamoDB stream shard utilization and scale Lambda concurrency accordingly.
| Metric | Target Value | Alert Threshold |
|---|---|---|
| Lambda Duration | < 10 seconds | > 15 seconds |
| Error Rate | < 1% | > 2% |
| Cold Starts | < 5% | > 10% |
| OpenSearch Latency | < 100ms | > 500ms |
Implement Comprehensive Error Logging and Alerting
Robust error handling ensures your AWS serverless search solution maintains reliability under various failure scenarios. Implement structured logging throughout your Lambda function, capturing detailed context for each processing step. Use AWS CloudWatch Logs Insights to query and analyze error patterns effectively.
Set up different alert levels for various error types. Critical alerts should trigger for complete processing failures or data corruption scenarios, while warning alerts can notify about temporary connectivity issues or performance degradation. Configure SNS topics to route alerts to appropriate team members based on severity.
Create dead letter queues for failed stream processing events, allowing you to replay problematic records after fixing underlying issues. Implement retry logic with exponential backoff for transient failures, but ensure infinite retry loops don’t occur. Log correlation IDs to trace individual records through the entire processing pipeline.
Monitor your DynamoDB to OpenSearch data sync health with custom CloudWatch metrics. Track successful sync rates, processing delays, and data consistency checks. Build dashboards that provide real-time visibility into your search infrastructure health, enabling proactive issue resolution before users experience problems.
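Publishing those custom metrics from the sync Lambda can be as simple as the sketch below, using the AWS SDK v3 CloudWatch client; the namespace and metric names are made up for illustration:

```typescript
import { CloudWatchClient, PutMetricDataCommand } from "@aws-sdk/client-cloudwatch";

const cloudwatch = new CloudWatchClient({ region: process.env.AWS_REGION });

// Illustrative namespace and metric names; call at the end of each invocation.
async function reportSyncResult(succeeded: number, failed: number) {
  await cloudwatch.send(
    new PutMetricDataCommand({
      Namespace: "Custom/SearchSync",
      MetricData: [
        { MetricName: "DocumentsSynced", Value: succeeded, Unit: "Count" },
        { MetricName: "DocumentsFailed", Value: failed, Unit: "Count" },
      ],
    })
  );
}
```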
Building a custom search solution for AWS Amplify Gen 2 might seem complex at first, but breaking it down into these core components makes it manageable. By setting up DynamoDB Streams to capture data changes, creating Lambda functions to process those events, and integrating OpenSearch for powerful search capabilities, you create a robust search system that rivals the original @search directive functionality. The key is getting your data synchronization right and making sure your search indexes stay up-to-date with your application data.
Start with a simple implementation and gradually add more advanced features as you become comfortable with the architecture. Test your Lambda functions thoroughly, monitor your OpenSearch performance, and don’t forget to optimize your queries for better user experience. This approach gives you more control over your search functionality while working within Gen 2’s current limitations. Once you have this foundation in place, you’ll find it easier to customize and extend your search capabilities to meet your specific application needs.