How to Integrate Microsoft Azure Cognitive Services with AWS Lambda

January 24, 2025

Learn how to seamlessly connect Azure Cognitive Services with AWS Lambda for enhanced AI capabilities in your applications with our easy-to-follow guide.

How to Connect Microsoft Azure Cognitive Services to AWS Lambda: A Simple Guide

 

Set Up Azure Cognitive Services

 

  • Log in to your Microsoft Azure account.
  • Navigate to the Azure Portal and click on "Create a resource".
  • Search for "Cognitive Services" and select the desired service (e.g., Text Analytics, Computer Vision).
  • Click "Create" and fill out the necessary details such as subscription, resource group, and pricing tier.
  • Once deployed, navigate to the resource and locate the "Keys and Endpoint" section. Note the API key and endpoint URL.

 

Set Up AWS Lambda Function

 

  • Log in to your AWS Management Console.
  • Navigate to the Lambda service and click "Create function".
  • Select "Author from scratch" and configure the function with a name, runtime (e.g., Node.js, Python), and permissions.
  • Create or select an existing IAM role with the necessary permissions for logs and any other AWS services you will use.
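If you create the execution role yourself, its trust policy must allow the Lambda service to assume it; the standard trust policy looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

Attach the AWS-managed `AWSLambdaBasicExecutionRole` policy to the role so the function can write CloudWatch logs.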

 

Integrate Azure Cognitive Services Within AWS Lambda

 

  • Prepare your Lambda function code to make HTTP requests. You can use libraries like `axios` for Node.js or `requests` for Python.
  • Add your Azure Cognitive Services endpoint and API key in the code for authentication and to make requests.

 

import json
import requests  # not bundled with the Lambda runtime; include it in your deployment package

def lambda_handler(event, context):
    url = "YOUR_AZURE_COGNITIVE_SERVICE_ENDPOINT"
    api_key = "YOUR_AZURE_API_KEY"
    headers = {"Ocp-Apim-Subscription-Key": api_key, "Content-Type": "application/json"}
    
    body = json.dumps({"documents": [{"id": "1", "language": "en", "text": "Hello world"}]})
    
    response = requests.post(url, headers=headers, data=body)
    
    return {
        'statusCode': response.status_code,  # surface Azure's status instead of a hardcoded 200
        'body': json.dumps(response.json())
    }

 

const axios = require('axios');
exports.handler = async (event) => {
    const url = "YOUR_AZURE_COGNITIVE_SERVICE_ENDPOINT";
    const api_key = "YOUR_AZURE_API_KEY";
    
    try {
        const response = await axios.post(
            url,
            { documents: [{ id: "1", language: "en", text: "Hello world" }] },
            { headers: { 'Ocp-Apim-Subscription-Key': api_key, 'Content-Type': 'application/json' } }
        );
        
        return {
            statusCode: 200,
            body: JSON.stringify(response.data)
        };
    } catch (error) {
        return {
            // error.response is undefined for network-level failures, so fall back to 500
            statusCode: error.response ? error.response.status : 500,
            body: error.message
        };
    }
};

 

Configure Environment Variables in AWS Lambda

 

  • Avoid hardcoding sensitive information, such as API keys, in the Lambda function. Use environment variables instead.
  • Navigate to your Lambda function configuration page, and in the "Environment variables" section, add new key-value pairs for your Azure endpoint and API keys.
  • Modify your Lambda function to fetch these variables using `process.env` in Node.js or `os.environ` in Python.
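In Python, the lookup can be as small as a helper like this (a minimal sketch; `AZURE_ENDPOINT` and `AZURE_API_KEY` are assumed variable names, so use whatever keys you added in the console):

```python
import os

def get_azure_config():
    """Fetch the Azure endpoint and key from Lambda environment variables.

    AZURE_ENDPOINT / AZURE_API_KEY are assumed names; use whichever keys
    you defined in the function's configuration.
    """
    return os.environ["AZURE_ENDPOINT"], os.environ["AZURE_API_KEY"]
```

Reading the variables inside the function (rather than at module import) makes the handler easier to test and tolerant of late configuration changes.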

 

Test Your Lambda Function

 

  • Inside the AWS Lambda console, create a new test event. Use the required format based on how your function is expected to trigger.
  • Invoke the function and check the output logs to ensure that data is being processed correctly and that the Azure Cognitive Service is returning expected values.
  • If errors occur, review the Lambda logs and modify code or configurations as necessary.
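For an S3-triggered handler like the ones later in this guide, a minimal test event and a small local harness might look like this (the bucket and key names are placeholders):

```python
import json

# A minimal S3 put-event in the shape Lambda delivers to the handler
# (bucket name and object key are placeholders).
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "my-upload-bucket"},
                "object": {"key": "incoming/photo.jpg"}}}
    ]
}

def extract_s3_location(event):
    """Pull the bucket name and object key out of an S3 trigger event,
    exactly as the example handlers in this guide do."""
    record = event["Records"][0]["s3"]
    return record["bucket"]["name"], record["object"]["key"]

# Paste this JSON into the Lambda console's test-event editor
print(json.dumps(sample_event, indent=2))
```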

 

Deploy and Monitor

 

  • Once your function is working correctly, set your Lambda function to trigger on specific AWS services (e.g., S3 file uploads, API Gateway).
  • Enable detailed monitoring and logging to watch the function's usage and performance.
  • Set up CloudWatch alarms for notifications on failures or performance issues to ensure smooth operation and timely intervention.
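As a sketch, an error alarm for the function can also be defined programmatically with boto3 (the function name and SNS topic ARN below are placeholders):

```python
def error_alarm_params(function_name, topic_arn):
    """Build a CloudWatch alarm definition that fires (and notifies the
    given SNS topic) when the function records any errors within a
    5-minute window."""
    return {
        "AlarmName": f"{function_name}-errors",
        "Namespace": "AWS/Lambda",
        "MetricName": "Errors",
        "Dimensions": [{"Name": "FunctionName", "Value": function_name}],
        "Statistic": "Sum",
        "Period": 300,
        "EvaluationPeriods": 1,
        "Threshold": 0,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],
    }

def create_error_alarm(function_name, topic_arn):
    import boto3  # the AWS SDK is preinstalled in the Lambda Python runtime
    boto3.client("cloudwatch").put_metric_alarm(**error_alarm_params(function_name, topic_arn))
```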

 

Omi Necklace

The #1 Open Source AI necklace: Experiment with how you capture and manage conversations.

Build and test with your own Omi Dev Kit 2.

How to Use Microsoft Azure Cognitive Services with AWS Lambda: Use Cases

 

Intelligent Image Moderation with Azure Cognitive Services and AWS Lambda

 

  • Leverage Microsoft Azure Cognitive Services to analyze and moderate images uploaded by users for inappropriate content. Use Azure's advanced algorithms to detect adult content, gory images, weapons, and more.
  • Once an image is uploaded to an S3 bucket on AWS, it triggers an AWS Lambda function. This function is responsible for calling the Azure Cognitive Services API to perform image analysis and collect moderation data.
  • Store the analysis results in a DynamoDB table. This allows for easy querying and reporting on the moderation status of images. Additionally, flag images that require further manual review.
  • Use AWS SNS to send notifications to administrators when potentially inappropriate content is detected by the Azure service, ensuring prompt action can be taken.
  • React with another Lambda function if images pass moderation. The images can be moved to another S3 bucket that serves content to a web application or a mobile app, ensuring only safe content is displayed to users.

 


import json
import boto3
import requests  # bundle with the deployment package; not in the Lambda runtime by default

def lambda_handler(event, context):
    # Access image from S3
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    s3_client = boto3.client('s3')
    image_data = s3_client.get_object(Bucket=bucket, Key=key)['Body'].read()

    # Call Azure Cognitive Services API
    endpoint = "https://<your_azure_endpoint>/vision/v3.2/analyze"
    headers = {'Ocp-Apim-Subscription-Key': '<your_azure_subscription_key>',
               'Content-Type': 'application/octet-stream'}
    params = {'visualFeatures': 'Adult'}
    response = requests.post(endpoint, headers=headers, params=params, data=image_data)
    response.raise_for_status()
    moderation_result = response.json()

    # Store results in DynamoDB, serialized to a string: DynamoDB rejects
    # the raw Python floats in the Vision API's confidence scores
    dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table('<your_dynamodb_table>')
    table.put_item(Item={
        'ImageKey': key,
        'ModerationResult': json.dumps(moderation_result)
    })

    # Notify admin if inappropriate content detected
    if moderation_result['adult']['isAdultContent']:
        sns_client = boto3.client('sns')
        sns_client.publish(
            TopicArn='<your_sns_topic>',
            Subject='Inappropriate Content Alert',
            Message=f"Image {key} contains inappropriate content."
        )

    # Further actions such as moving image to a public bucket for approved content
    # can be implemented here

    return {
        'statusCode': 200,
        'body': 'Image processed successfully'
    }

 

 

Intelligent Text Translation and Summarization System

 

  • Utilize Microsoft Azure Cognitive Services for natural language processing to translate and summarize large text documents, aiding in multilingual understanding and information extraction.
  • Store documents in an S3 bucket on AWS. When a new document is uploaded, an AWS Lambda function is triggered to process the text extraction and send the content to Azure's Translation and Text Analytics APIs.
  • The translated text along with its summarized version is stored in an Amazon DynamoDB table, creating a structured repository for quick access and reference.
  • Employ Amazon SNS to notify specific users or systems when new translations or summaries are available, enhancing workflow automation and information dissemination.
  • Create a real-time feedback loop by setting up another AWS Lambda function to analyze user feedback stored in a separate S3 bucket and adjust parameters like translation quality thresholds or summary lengths dynamically.

 


import boto3
import requests

def lambda_handler(event, context):
    # Access document from S3
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    s3_client = boto3.client('s3')
    document_data = s3_client.get_object(Bucket=bucket, Key=key)['Body'].read().decode('utf-8')

    # Call Azure Translation API
    translate_endpoint = "https://api.cognitive.microsofttranslator.com/translate"
    translate_headers = {'Ocp-Apim-Subscription-Key': '<your_azure_subscription_key>',
                         'Content-Type': 'application/json'}
    translate_params = {'api-version': '3.0', 'to': 'es'}  # Translate to Spanish
    translate_response = requests.post(translate_endpoint, headers=translate_headers, params=translate_params, json=[{'Text': document_data}])
    translation_result = translate_response.json()[0]['translations'][0]['text']

    # Call Azure Text Analytics for summarization.
    # Note: in the current Azure Language service, extractive summarization is
    # an asynchronous submit-then-poll operation; the single synchronous call
    # below (and its response shape) is a simplification for illustration.
    summarize_endpoint = "https://<your_azure_endpoint>/text/analytics/v3.0/summarize"
    summarize_headers = {'Ocp-Apim-Subscription-Key': '<your_azure_subscription_key>',
                         'Content-Type': 'application/json'}
    summarize_response = requests.post(summarize_endpoint, headers=summarize_headers, json={'documents': [{'id': '1', 'text': translation_result}]})
    summary_result = summarize_response.json()['documents'][0]['summarizedText']

    # Store results in DynamoDB
    dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table('<your_dynamodb_table>')
    table.put_item(Item={
        'DocumentKey': key,
        'Translation': translation_result,
        'Summary': summary_result
    })

    # Notify interested parties via SNS
    sns_client = boto3.client('sns')
    sns_client.publish(
        TopicArn='<your_sns_topic>',
        Subject='New Translation and Summary Available',
        Message=f"Document {key} has been translated and summarized."
    )

    return {
        'statusCode': 200,
        'body': 'Document translation and summarization processed successfully'
    }

 

Omi App

Fully Open-Source AI wearable app: build and use reminders, meeting summaries, task suggestions and more. All in one simple app.

GitHub →

Order Friend Dev Kit

Open-source AI wearable
Build using the power of recall

Order Now

Troubleshooting Microsoft Azure Cognitive Services and AWS Lambda Integration

How to call Azure Cognitive Services API from AWS Lambda?

 

Set Up AWS Lambda

 

  • Create an AWS Lambda function in the AWS console. Choose a runtime such as Node.js or Python.
  • Adjust the execution role to include necessary permissions for accessing external APIs.

 

Install Required Libraries

 

  • Ensure your Lambda's deployment package includes an HTTP client library such as `axios` for Node.js or `requests` for Python; neither ships with the managed runtimes, so install it into the deployment package (e.g., `pip install requests -t .`) or attach it as a Lambda layer.

 

Write Code to Access Azure API

 

  • Obtain an API key and endpoint from the Azure portal under your Cognitive Services resource.
  • Set up environment variables for secure storage of sensitive information like the Azure API key.
// Node.js example
const axios = require('axios');

exports.handler = async (event) => {
  const endpoint = process.env.AZURE_ENDPOINT;
  const apiKey = process.env.AZURE_API_KEY;

  try {
    const response = await axios.post(`${endpoint}/analyze`, event, {
      headers: { 'Ocp-Apim-Subscription-Key': apiKey }
    });
    return response.data;
  } catch (error) {
    return { error: error.message };
  }
};

 

Deploy and Test

 

  • Upload your code to Lambda or use AWS CLI for deployment.
  • Test the function to ensure proper integration.
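As a quick check, you can also invoke the deployed function with boto3 instead of the console (the function name and payload here are hypothetical):

```python
import json

def build_invoke_args(function_name, payload):
    """Prepare keyword arguments for lambda_client.invoke(); the payload
    must be JSON-encoded bytes."""
    return {
        "FunctionName": function_name,
        "InvocationType": "RequestResponse",  # synchronous: wait for the result
        "Payload": json.dumps(payload).encode("utf-8"),
    }

def invoke_function(function_name, payload):
    import boto3  # AWS SDK
    response = boto3.client("lambda").invoke(**build_invoke_args(function_name, payload))
    return json.loads(response["Payload"].read())
```

If the returned body contains an Azure error message rather than analysis results, the key or endpoint configuration is the first thing to re-check.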

 

Why is my Lambda function not connecting to Azure Cognitive Services?

 

Check Network Configuration

 

  • If the function is attached to a VPC, verify it has outbound internet access (e.g., through a NAT gateway); functions not in a VPC have internet access by default.
  • Confirm security groups and network ACLs allow outgoing traffic on the required ports (usually port 443 for HTTPS).
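One way to rule out networking from inside the function is a bare TCP connectivity probe (standard library only; pass the hostname of whatever Azure endpoint you call):

```python
import socket

def can_reach(host, port=443, timeout=3):
    """Return True if a TCP connection to host:port succeeds, i.e. the
    function has outbound network access on that port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failures, timeouts, and refused connections
        return False
```

If this returns False for every host, the problem is the VPC or security-group configuration, not Azure.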

 

Validate Permissions and Credentials

 

  • Ensure the Azure Cognitive Services API key is correctly stored and accessed within the Lambda, usually via environment variables or AWS Secrets Manager.
  • Note that AWS IAM does not govern access to Azure endpoints; verify IAM roles only for the additional AWS services (e.g., Secrets Manager, S3) used in this process.

 

Test Connectivity

 

  • Use Postman or a similar tool to test the endpoint and isolate if the issue lies with Azure or AWS.
  • Debug within Lambda with logging to capture connection errors and validate the request details by adding debug logs.

 

import requests

def lambda_handler(event, context):
    url = "https://<your-cognitive-service-endpoint>"
    headers = {"Ocp-Apim-Subscription-Key": "<api-key>"}

    try:
        response = requests.get(url, headers=headers)
        response.raise_for_status()
        return response.json()

    except requests.exceptions.RequestException as e:
        print(f"Error: {e}")  # appears in CloudWatch logs for debugging
        return {"error": str(e)}

 

How to handle Azure Cognitive Services authentication in AWS Lambda?

 

Set Up Azure Credentials

 

  • To authenticate with Azure Cognitive Services, store API keys securely, e.g., using AWS Secrets Manager.
  • Create a secret containing your Azure API key and endpoint URL within AWS Secrets Manager.

 

Access Secrets in AWS Lambda

 

  • Attach IAM policies that allow Lambda to access AWS Secrets Manager.
  • Use the AWS SDK in your Lambda function to fetch the Azure API credentials.

 

import json
import os
import boto3

def lambda_handler(event, context):
    secret_name = os.environ['SECRET_NAME']
    region_name = os.environ['AWS_REGION']
    
    session = boto3.session.Session()
    client = session.client(service_name='secretsmanager', region_name=region_name)
    
    secret_value = client.get_secret_value(SecretId=secret_name)
    secret = json.loads(secret_value['SecretString'])  # SecretString is a JSON-encoded string
    azure_api_key = secret['API_KEY']
    azure_endpoint = secret['ENDPOINT']
    
    # Returning the credentials is for illustration only; in practice, use them
    # to call the Azure endpoint rather than exposing them in the response.
    return {"api_key": azure_api_key, "endpoint": azure_endpoint}

 

Invoke Azure Cognitive Service

 

  • Use the fetched credentials to make requests to Azure services.
  • Handle any potential errors in communication or authentication.

 

import requests

def call_azure_service(api_key, endpoint, data):
    headers = {'Ocp-Apim-Subscription-Key': api_key}
    response = requests.post(endpoint, headers=headers, json=data)
    
    if response.status_code == 200:
        return response.json()
    else:
        response.raise_for_status()

 

Don’t let questions slow you down—experience true productivity with the AI Necklace. With Omi, you can have the power of AI wherever you go—summarize ideas, get reminders, and prep for your next project effortlessly.

Order Now

Join the #1 open-source AI wearable community

Build faster and better with 3900+ community members on Omi Discord

Participate in hackathons to expand the Omi platform and win prizes


Get cash bounties, free Omi devices and priority access by taking part in community activities

Join our Discord → 

OMI NECKLACE + OMI APP
First & only open-source AI wearable platform


OMI NECKLACE: DEV KIT
Order your Omi Dev Kit 2 now and create your use cases

Omi Dev Kit 2

Endless customization

OMI DEV KIT 2

$69.99

Make your life more fun with your AI wearable clone. It gives you thoughts, personalized feedback and becomes your second brain to discuss your thoughts and feelings. Available on iOS and Android.

Your Omi will seamlessly sync with your existing omi persona, giving you a full clone of yourself – with limitless potential for use cases:

  • Real-time conversation transcription and processing;
  • Develop your own use cases for fun and productivity;
  • Hundreds of community apps to make use of your Omi Persona and conversations.

Learn more

Omi Dev Kit 2: build at a new level

Key Specs

| Feature                               | OMI DEV KIT      | OMI DEV KIT 2    |
|---------------------------------------|------------------|------------------|
| Microphone                            | Yes              | Yes              |
| Battery                               | 4 days (250 mAh) | 2 days (250 mAh) |
| On-board memory (works without phone) | No               | Yes              |
| Speaker                               | No               | Yes              |
| Programmable button                   | No               | Yes              |
| Estimated delivery                    | -                | 1 week           |

What people say

“Helping with MEMORY, COMMUNICATION with business/life partner, capturing IDEAS, and solving for a hearing CHALLENGE.”

Nathan Sudds

“I wish I had this device last summer to RECORD A CONVERSATION.”

Chris Y.

“Fixed my ADHD and helped me stay organized.”

David Nigh

OMI NECKLACE: DEV KIT
Take your brain to the next level

LATEST NEWS
Follow and be first in the know


thought to action

team@basedhardware.com
