Convert spoken requirements into structured Kiro specifications using AI transcription and LLM processing.
This Streamlit application captures audio input (microphone or file upload), transcribes it with Amazon Transcribe, and generates structured requirements documents with Claude 3.5 Sonnet on Amazon Bedrock.
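The pipeline boils down to two AWS request payloads: a Transcribe job pointed at the uploaded audio in S3, and a Bedrock `InvokeModel` body in the Anthropic Messages format. The sketch below shows the shape of both; the bucket, job name, prompt wording, and model ID are illustrative placeholders, not taken from the application code.

```python
import json

# Example Bedrock model ID; check the model access page in your AWS Console.
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"

def transcribe_job_params(bucket: str, key: str, job_name: str) -> dict:
    """Parameters for Amazon Transcribe's StartTranscriptionJob call."""
    return {
        "TranscriptionJobName": job_name,
        "Media": {"MediaFileUri": f"s3://{bucket}/{key}"},
        "MediaFormat": "wav",
        "LanguageCode": "en-US",
    }

def bedrock_request_body(transcript: str) -> str:
    """JSON body for Bedrock InvokeModel using the Anthropic Messages API."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 4096,
        "messages": [{
            "role": "user",
            # Hypothetical prompt; the app's actual prompt may differ.
            "content": "Turn these spoken requirements into a structured "
                       "requirements.md document:\n\n" + transcript,
        }],
    })
```

In the real app these payloads would be passed to `boto3.client("transcribe").start_transcription_job(**params)` and `boto3.client("bedrock-runtime").invoke_model(modelId=MODEL_ID, body=body)`.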
🤖 Fun fact: This entire project was created using Kiro, an AI-powered development assistant that helped design, implement, and document the application from initial concept to deployment.
- 🎤 Browser microphone recording or .wav file upload
- 🔊 AI transcription via Amazon Transcribe
- ✨ Structured spec generation using Claude 3.5 Sonnet
- 📝 Automatic project creation with requirements.md files
- uv package manager installed
- AWS account with access to S3, Transcribe, and Bedrock services
- Bedrock model access for Claude 3.5 Sonnet (or another supported model), requested via the AWS Console
- S3 bucket for audio storage
```shell
git clone https://github.com/aws-samples/sample-voice-driven-development
cd sample-voice-driven-development
uv sync
export S3_BUCKET_NAME=your-s3-bucket-name
uv run streamlit run streamlit_app.py
```
Build the image:

```shell
docker build -t voice-driven-dev .
```

Run it using an AWS credentials volume mount:

```shell
docker run -p 8501:8501 \
  -e S3_BUCKET_NAME=your_bucket_name \
  -v ~/.aws:/home/appuser/.aws:ro \
  -v $(pwd)/projects:/app/projects \
  voice-driven-dev
```
Or with environment variables:

```shell
docker run -p 8501:8501 \
  -e AWS_ACCESS_KEY_ID=your_access_key \
  -e AWS_SECRET_ACCESS_KEY=your_secret_key \
  -e S3_BUCKET_NAME=your_bucket_name \
  -v $(pwd)/projects:/app/projects \
  voice-driven-dev
```
Access the app at http://localhost:8501
- Record audio or upload a `.wav` file with your requirements
- Click process to transcribe and generate specifications
- Download the generated `requirements.md` file
- Find your project in the `projects/` directory or download it from the web UI
Your credentials need access to:
- S3: PutObject, GetObject on your bucket
- Transcribe: StartTranscriptionJob, GetTranscriptionJob
- Bedrock: InvokeModel for Claude 3.5 Sonnet
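A least-privilege IAM policy covering those actions might look like the sketch below. The bucket name and model ARN pattern are placeholders; adjust them to your bucket and the model you enabled.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::your-s3-bucket-name/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "transcribe:StartTranscriptionJob",
        "transcribe:GetTranscriptionJob"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "bedrock:InvokeModel",
      "Resource": "arn:aws:bedrock:*::foundation-model/anthropic.claude-3-5-sonnet-*"
    }
  ]
}
```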