Coming Soon!
Example:
| Name | GitHub Handle | Contribution |
|---|---|---|
| Taylor Nguyen | @taylornguyen | Data exploration, visualization, overall project coordination |
| Jordan Ramirez | @jramirez | Data collection, exploratory data analysis (EDA), dataset documentation |
| Amina Hassan | @aminahassan | Data preprocessing, feature engineering, data validation |
| Priya Mehta | @pmehta | Model selection, hyperparameter tuning, model training and optimization |
| Chris Park | @chrispark | Model evaluation, performance analysis, results interpretation |
Example:
- Developed a machine learning model using [model type/technique] to address [challenge project task].
- Achieved [key metric or result], demonstrating [value or impact] for [host company].
- Generated actionable insights to inform business decisions at [host company or stakeholders].
- Implemented [specific methodology] to address industry constraints or expectations.
Provide step-by-step instructions so someone else can run your code and reproduce your results. Depending on your setup, include:
- How to clone the repository
- How to install dependencies
- How to set up the environment
- How to access the dataset(s) (see the sketch after this list)
- How to run the notebook or scripts
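For example, a minimal Python sanity check like the one below can confirm that the dependencies installed correctly and the dataset is reachable. The package names, file path, and column layout are placeholders, not details of any specific project:

```python
# Hypothetical post-setup sanity check -- package names and the dataset path are placeholders.
import pandas as pd      # assumes pandas is listed in your dependencies
import sklearn           # assumes scikit-learn is listed in your dependencies

# Load the dataset from its expected location (adjust the path for your project).
df = pd.read_csv("data/raw/dataset.csv")

# A quick look at the shape and first rows confirms the data loaded as expected.
print(f"Rows: {len(df)}, Columns: {len(df.columns)}")
print(df.head())
```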
Describe:
- How this project is connected to the Break Through Tech AI Program
- Your AI Studio host company and the project objective and scope
- The real-world significance of the problem and the potential impact of your work
You might consider describing the following (as applicable):
- The dataset(s) used: origin, format, size, type of data
- Data exploration and preprocessing approaches (see the sketch after this list)
- Insights from your Exploratory Data Analysis (EDA)
- Challenges and assumptions when working with the dataset(s)
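For instance, a short exploratory pass in pandas can surface data types, summary statistics, and missing values before preprocessing decisions are made. This is a minimal sketch; the dataset path and the duplicate-drop and median-fill steps are illustrative assumptions, not requirements:

```python
# Hypothetical EDA and preprocessing sketch -- the dataset path and cleaning steps are placeholders.
import pandas as pd

df = pd.read_csv("data/raw/dataset.csv")

# Structure and summary statistics for an initial look at the data.
df.info()
print(df.describe(include="all"))

# Missing values per column, sorted so the most incomplete columns stand out.
print(df.isna().sum().sort_values(ascending=False))

# Illustrative preprocessing: drop exact duplicates and fill numeric gaps with column medians.
df = df.drop_duplicates()
numeric_cols = df.select_dtypes(include="number").columns
df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].median())
```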
Potential visualizations to include:
- Plots, charts, heatmaps, feature visualizations, sample dataset images
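As one concrete example, a correlation heatmap over the numeric features is a common EDA visualization. The sketch below assumes seaborn and matplotlib are available and reuses the hypothetical dataset path from above:

```python
# Hypothetical visualization sketch -- the dataset path is a placeholder.
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

df = pd.read_csv("data/raw/dataset.csv")

# Correlation heatmap over numeric columns only.
corr = df.select_dtypes(include="number").corr()
plt.figure(figsize=(8, 6))
sns.heatmap(corr, annot=True, fmt=".2f", cmap="coolwarm")
plt.title("Feature correlation heatmap")
plt.tight_layout()
plt.show()
```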
You might consider describing the following (as applicable):
- Model(s) used (e.g., CNN with transfer learning, regression models)
- Feature selection and hyperparameter tuning strategies (see the sketch after this list)
- Training setup (e.g., % of data for training/validation, evaluation metric, baseline performance)
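As an illustration rather than a prescription, the sketch below uses scikit-learn to hold out a test split and run a small hyperparameter grid search over a random forest. The preprocessed-data path, target column, grid values, and scoring metric are all assumptions to adapt to your project:

```python
# Hypothetical training sketch -- the data path, target column, and grid values are placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

df = pd.read_csv("data/processed/dataset.csv")
X = df.drop(columns=["target"])   # placeholder feature matrix
y = df["target"]                  # placeholder target column

# Hold out 20% of the data for final evaluation; stratify to preserve class balance.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Small, illustrative hyperparameter grid searched with 5-fold cross-validation.
param_grid = {"n_estimators": [100, 300], "max_depth": [None, 10, 20]}
search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid,
    cv=5,
    scoring="f1_macro",
)
search.fit(X_train, y_train)

print("Best parameters:", search.best_params_)
print("Best cross-validated macro F1:", search.best_score_)
```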
You might consider describing the following (as applicable):
- Performance metrics (e.g., accuracy, F1 score, RMSE), as computed in the sketch after this list
- How your model performed
- Insights from evaluating model fairness
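For example, the helper below is a sketch that assumes a fitted classifier and the held-out split from the training sketch above; it reports overall metrics plus a very simple per-group accuracy comparison. The group column name is a placeholder for whatever demographic or segment attribute applies to your data:

```python
# Hypothetical evaluation sketch -- the model, test split, and group column are placeholders.
import pandas as pd
from sklearn.metrics import accuracy_score, classification_report, f1_score

def evaluate(model, X_test, y_test, group_col=None):
    """Report overall metrics, plus per-group accuracy if a group column is given."""
    y_pred = model.predict(X_test)
    print("Accuracy:", accuracy_score(y_test, y_pred))
    print("Macro F1:", f1_score(y_test, y_pred, average="macro"))
    print(classification_report(y_test, y_pred))

    # Simple fairness check: compare accuracy across values of a demographic column.
    if group_col is not None and group_col in X_test.columns:
        preds = pd.Series(y_pred, index=X_test.index)
        for value, idx in X_test.groupby(group_col).groups.items():
            print(f"Accuracy for {group_col}={value}:",
                  round(accuracy_score(y_test.loc[idx], preds.loc[idx]), 3))

# Usage (hypothetical): evaluate(search.best_estimator_, X_test, y_test, group_col="group")
```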
Potential visualizations to include:
- Confusion matrix, precision-recall curve, feature importance plot, prediction distribution, outputs from fairness or explainability tools
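As one concrete example, scikit-learn can render a confusion matrix directly from a fitted estimator. The names below (`search`, `X_test`, `y_test`) refer to the hypothetical training sketch above and are placeholders:

```python
# Hypothetical confusion-matrix sketch -- assumes a fitted model and a held-out test split.
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay

ConfusionMatrixDisplay.from_estimator(search.best_estimator_, X_test, y_test)
plt.title("Confusion matrix on the held-out test set")
plt.tight_layout()
plt.show()
```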
You might consider addressing the following (as applicable):
- What are some of the limitations of your model?
- What would you do differently with more time/resources?
- What additional datasets or techniques would you explore?
If applicable, indicate how your project can be used by others by specifying and linking to an open source license type (e.g., MIT, Apache 2.0). Make sure your Challenge Advisor approves of the selected license type.
Example: This project is licensed under the MIT License.
Cite relevant papers, articles, or resources that supported your project.
Thank your Challenge Advisor, host company representatives, TA, and others who supported your project.