Commit

fix: fix docs directory
HarshCasper committed Aug 13, 2021
1 parent 9987d89 commit f85aa3f
Showing 11 changed files with 263 additions and 0 deletions.
20 changes: 20 additions & 0 deletions .github/workflows/gh-pages.yml
@@ -0,0 +1,20 @@

name: Publish to GH Pages

on:
  repository_dispatch:
  push:
    branches:
      - master
      - gh-pages

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Deploy
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./docs
Empty file added docs/.nojekyll
Empty file.
12 changes: 12 additions & 0 deletions docs/README.md
@@ -0,0 +1,12 @@
# Helping Hand

Welcome to the HelpingHand Documentation!

You can find the build instructions and information on the features for the project here:

- [Build instructions for the Flutter application](build-flutter.md)
- [Build instructions for the Image Captioning backend](build-flask.md)
- [Live Image Labelling](live-image-labelling.md)
- [Currency Detection](currency-detection.md)
- [Text Extraction from Images](text-extraction.md)
- [SOS Feature](sos-feature.md)
7 changes: 7 additions & 0 deletions docs/_sidebar.md
@@ -0,0 +1,7 @@
- [Introduction](/)
- [Build instructions for the Flutter application](build-flutter.md)
- [Build instructions for the Image Captioning backend](build-flask.md)
- [Live Image Labelling](live-image-labelling.md)
- [Currency Detection](currency-detection.md)
- [Text Extraction from Images](text-extraction.md)
- [SOS Feature](sos-feature.md)
52 changes: 52 additions & 0 deletions docs/build-flask.md
@@ -0,0 +1,52 @@
# Image Captioning Backend

## About

The Backend API is designed and developed to take the URLs of images uploaded from the client-side interface of the application and return a caption for each image generated using the `Captionbot`.

## Local Installation

1. Drop a ⭐ on the Github Repository.
2. Clone the Repo using your local Git client and move into the API directory:

```sh
git clone https://github.com/HarshCasper/HelpingHand.git
cd HelpingHand/API
```

3. Make a Virtual Environment:

```sh
virtualenv env
```

4. Activate the Virtual Environment:

```sh
source ./env/bin/activate
```

5. Install the Packages:

```sh
pip install -r requirements.txt
```

6. Run the App:

```sh
python app.py
```

7. Go to `localhost:5000` and you are ready to consume the endpoints.

## Endpoints

| Endpoint | Description |
|---|---|
| `/` | The Root Endpoint to display a Sample Message |
| `/caption` | The Endpoint to pass an Image URL and get the Caption |
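
As an illustration, once the server is running locally you can exercise the caption endpoint with a short script like the one below. The exact request format is an assumption here: the parameter name (`url` in this sketch) and whether the image URL is sent as a query parameter or a JSON body may differ in the actual implementation.

```python
import requests

# Base URL of the locally running Flask backend (see the build steps above).
BASE_URL = "http://localhost:5000"

# Assumed request format: the image URL passed as a query parameter named "url".
# The real endpoint may expect a different parameter name or a JSON body instead.
image_url = "https://example.com/sample-image.jpg"
response = requests.get(f"{BASE_URL}/caption", params={"url": image_url})

print(response.status_code)
print(response.text)  # Expected to contain the caption generated for the image
```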

## Deployed Version

View the Deployed version [here](https://bow-flannel-food.glitch.me/) on Glitch.
29 changes: 29 additions & 0 deletions docs/build-flutter.md
@@ -0,0 +1,29 @@
# Build instructions for the Flutter application

Once you have cloned the repository, you will find the Flutter application on the master branch.

To build it, you need to have Flutter installed. Refer to the official docs provided by Flutter over [here](https://flutter.dev/docs/get-started/install) and install it for your operating system.

Once you're done, you can open up the application in your preferred editor. The best way to go would be to use Android Studio, but you can use any other editor such as VS Code as well.

The next step would be to enter the application directory and run the following command to fetch the dependencies.

```sh
flutter pub get
```

If you're using Android Studio, you can directly use the green play button to install the app on your device or an emulator. Otherwise, run the following command:

```sh
flutter run
```

This will run the application either on a physical device connected to your system or on an emulator that has been set up.

Other useful links:

- [Download Android Studio](https://developer.android.com/studio)
- [Lab: Write your first Flutter app](https://flutter.dev/docs/get-started/codelab)
- [Cookbook: Useful Flutter samples](https://flutter.dev/docs/cookbook)
- For help getting started with Flutter, view the Flutter [online documentation](https://flutter.dev/docs), which offers tutorials, samples, guidance on mobile development, and a full API reference.
17 changes: 17 additions & 0 deletions docs/currency-detection.md
@@ -0,0 +1,17 @@
# Currency Detection

Our application allows users to detect currency notes as well.

We have used a custom-trained TFLite model trained on Indian currency notes. The model can be trained on more data to detect the currencies of other countries as well.

## Workflow of the currency detection system

- Open the currency detection screen and tap on it.
- Take a photo of the note. Our model can classify low-quality or blurred images, and even folded notes, with high accuracy.
- The model classifies the image and shows a dialog with the prediction. For better accessibility, our text-to-speech functionality also speaks out the prediction, so the user does not need to see the screen to know the result (a sketch of this classification step follows below).
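
For readers curious how such an on-device classification step works, here is a minimal sketch of running a TFLite image classifier from Python. The model and image file names, the input preprocessing, and the label set are placeholders for illustration and are not taken from the project; the app itself runs its model through the Flutter tflite plugin rather than a script like this.

```python
import numpy as np
import tensorflow as tf
from PIL import Image

# Placeholder names: the actual model and labels ship inside the app's assets.
MODEL_PATH = "currency_detection.tflite"
LABELS = ["10 INR", "20 INR", "50 INR", "100 INR", "200 INR", "500 INR", "2000 INR"]

interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Resize the photo to the model's expected input shape (assumed float RGB, scaled to [0, 1]).
height, width = input_details[0]["shape"][1:3]
image = Image.open("note.jpg").convert("RGB").resize((width, height))
input_data = np.expand_dims(np.asarray(image, dtype=np.float32) / 255.0, axis=0)

interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()

scores = interpreter.get_tensor(output_details[0]["index"])[0]
print("Prediction:", LABELS[int(np.argmax(scores))])
```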

## Example

| ![1](https://user-images.githubusercontent.com/41234408/94897157-b2779700-04ac-11eb-8fdc-4556b87206e7.png) | ![2](https://user-images.githubusercontent.com/41234408/94897180-bb686880-04ac-11eb-8657-40aa1f360765.png) | ![3](https://user-images.githubusercontent.com/41234408/94897200-c3280d00-04ac-11eb-8f0a-9daf8d1cf15b.png) |
|---|---|---|
| Taking a picture | Prediction for a blurred 500INR Note | Prediction for a 20INR Note |
73 changes: 73 additions & 0 deletions docs/index.html
@@ -0,0 +1,73 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<title>Helping Hand</title>
<meta
name="description"
content="Documentation for Helping Hand"
/>
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1" />
<meta name="description" content="Description" />
<link rel="icon" href="https://camo.githubusercontent.com/8c42c0356ea7f5ea50c3b4d2804abc44d9fb8851426e76a33d671498ab0d8c07/68747470733a2f2f63646e2e646973636f72646170702e636f6d2f6174746163686d656e74732f3735373631343630343037303238393535382f3736313234333636373030303036363133382f46696e616c5f48485f6c6f676f2e6a7067" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, minimum-scale=1.0"
/>
<link
rel="stylesheet"
href="//cdn.jsdelivr.net/npm/docsify@4/lib/themes/vue.css"
/>
<script src="//unpkg.com/docsify-edit-on-github/index.js"></script>

<link
rel="stylesheet"
href="//cdn.jsdelivr.net/npm/docsify-darklight-theme@latest/dist/style.min.css"
title="docsify-darklight-theme"
type="text/css"
/>
<style type="text/css">
.content {
overflow: auto;
}
:root {
--accent: var(--theme-color) !important;
}
</style>
</head>
<body class="ready sticky">
<div id="app"></div>
<script>
var repo = "https://github.com/HarshCasper/HelpingHand";
window.$docsify = {
name: "Helping Hand",
repo: repo,
loadSidebar: true,
themeColor: "#316ddb",
pagination: {
previousText: "Previous",
nextText: "Next",
},
plugins: [
EditOnGithubPlugin.create(
repo + "/blob/main/docs/",
null,
"📝 Edit on GitHub"
),
],
};
</script>
<!-- Docsify v4 -->
<script src="//cdn.jsdelivr.net/npm/docsify@4"></script>
<script src="https://unpkg.com/docsify-copy-code@2"></script>
<script src="//unpkg.com/docsify-pagination/dist/docsify-pagination.min.js"></script>
<script src="//cdn.jsdelivr.net/npm/prismjs/components/prism-bash.min.js"></script>
<script src="//cdn.jsdelivr.net/npm/docsify-tabs@1/dist/docsify-tabs.min.js"></script>
<script
src="//cdn.jsdelivr.net/npm/docsify-darklight-theme@latest/dist/index.min.js"
type="text/javascript"
></script>
</body>
</html>
29 changes: 29 additions & 0 deletions docs/live-image-labelling.md
@@ -0,0 +1,29 @@
# Live Image Labelling

Live Image Labelling is one of the features that capitalize on TensorFlow, Google's machine learning framework, which is designed to develop and serve models even on low-end devices like smartphones. To make this possible, we rely on transfer learning to develop the models quickly and achieve an appreciable performance metric even without support from high-end devices.

## What is Transfer Learning?

The idea and methodology behind Transfer Learning are quite simple: To use an existing pre-trained Neural Network’s knowledge on a new dataset which it has never seen before.

The idea behind Transfer Learning started with AlexNet back in 2012, when it won the ImageNet Large Scale Visual Recognition Challenge. After AlexNet, many other pre-trained models came onto the scene and outperformed AlexNet in terms of accuracy on the ImageNet dataset.

Researchers conceptualized an idea to utilize these pre-trained models to train and develop new classifiers on a dataset that the pre-trained model has never encountered before. This technique was used to harness and transfer the learning of a previous model to a new dataset.

And surprisingly, Transfer Learning makes it quite easy for researchers and machine learning developers to train models on new datasets by simply modifying the last layer according to their needs.

## Usage of Transfer Learning

We make use of the SSD MobileNet model from Keras and set its layers to non-trainable (`trainable = False`) so that we do not retrain them ourselves. After successfully training on our data, we convert the model to a TFLite model that can be served using a TFLite plugin.
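
As a rough sketch of this freeze-the-base-and-convert workflow, the snippet below uses MobileNetV2 from `keras.applications` as the frozen base with a new classification head, purely for illustration; the class count, input size, and training data are placeholders, and the project's actual SSD-based detection pipeline may differ.

```python
import tensorflow as tf

NUM_CLASSES = 5          # Placeholder: number of classes in the new dataset
INPUT_SHAPE = (224, 224, 3)

# Pre-trained base network with its layers frozen (trainable = False),
# so only the newly added classification head is trained.
base = tf.keras.applications.MobileNetV2(
    input_shape=INPUT_SHAPE, include_top=False, weights="imagenet"
)
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_dataset, epochs=...)  # Train only the new head on the new data

# Convert the trained model to TFLite so it can be served on-device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("model.tflite", "wb") as f:
    f.write(converter.convert())
```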

The tflite plugin is included as a dependency and is used to integrate our neural network with the app: the `loadModel()` function loads the trained model, and the `runModelOnFrame()` function takes the camera stream input and generates the inferences.

## Screenshot

![image](https://i.imgur.com/uQ4v1nB.jpg)

## Footnotes

- [A Gentle Introduction to Transfer Learning for Deep Learning](https://machinelearningmastery.com/transfer-learning-for-deep-learning/)
- [ML for Mobile and Edge Devices](https://www.tensorflow.org/lite/guide)
- [tflite Flutter Package](https://pub.dev/packages/tflite)
9 changes: 9 additions & 0 deletions docs/sos-feature.md
@@ -0,0 +1,9 @@
# SOS

One of the most important features of the application is the SOS feature. In order to use the feature, a user just needs to enter 3 phone numbers on their phone.

To access the feature immediately and without much hassle, a user just needs to shake the phone vigorously for 3-4 seconds, and an SOS message with the user's exact location details will be sent to the mobile numbers they have entered.
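
The documentation does not describe how the shake is detected internally. Purely as an illustrative sketch of one plausible approach, and not the app's confirmed implementation, a shake trigger can be approximated by counting spikes in accelerometer magnitude within a short window; the threshold, window, and spike count below are placeholder values.

```python
import math

SHAKE_THRESHOLD = 25.0   # m/s^2, placeholder value well above gravity (~9.8)
WINDOW_SECONDS = 3.0     # Placeholder: how long the vigorous shaking must last
MIN_SPIKES = 6           # Placeholder: spikes required within the window

def magnitude(x: float, y: float, z: float) -> float:
    """Overall acceleration magnitude from the three accelerometer axes."""
    return math.sqrt(x * x + y * y + z * z)

def detect_shake(samples):
    """samples: iterable of (timestamp, x, y, z) accelerometer readings."""
    spike_times = []
    for t, x, y, z in samples:
        if magnitude(x, y, z) >= SHAKE_THRESHOLD:
            spike_times.append(t)
            # Keep only spikes inside the recent window.
            spike_times = [s for s in spike_times if t - s <= WINDOW_SECONDS]
            if len(spike_times) >= MIN_SPIKES:
                return True  # Sustained vigorous shaking: fire the SOS message
    return False
```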

## SOS in action

![ezgif com-gif-maker](https://user-images.githubusercontent.com/41234408/94912972-10fd3f00-04c6-11eb-993c-80ad60afeb9b.gif)
15 changes: 15 additions & 0 deletions docs/text-extraction.md
@@ -0,0 +1,15 @@
# Text Extraction from images

Users might sometimes need to read text present in a photo they have taken or saved in their gallery. Being visually impaired, it might not be easy for them to read the text properly. So we have provided a feature that lets users take a photo or select existing photos and extract all the text from them. This text is then spoken out to the user using our text-to-speech functionality.

## Workflow of the text extraction feature

- Open the Text Extraction screen and tap on it.
- Take a photo or select one from the gallery.
- The photo, as well as the text extracted from it using OCR, is displayed in a dialog with stop, replay, and pause options, and the text is spoken out to the user as well (a sketch of this extract-then-speak flow follows below).
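
Purely as an illustration of the extract-then-speak flow, the sketch below uses pytesseract for OCR and pyttsx3 for text-to-speech on a desktop; these are not the libraries the Flutter app itself uses, and the file name is a placeholder.

```python
import pytesseract   # Requires the Tesseract OCR engine installed on the system
import pyttsx3
from PIL import Image

# Extract all recognizable text from the selected photo.
text = pytesseract.image_to_string(Image.open("photo.jpg"))
print(text)

# Speak the extracted text out loud, mirroring the app's text-to-speech step.
engine = pyttsx3.init()
engine.say(text if text.strip() else "No text was found in the image.")
engine.runAndWait()
```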

## Examples

| ![photo_2020-10-02_13-05-58](https://user-images.githubusercontent.com/41234408/94899133-5e6eb180-04b0-11eb-8c83-155328065ef1.jpg) | ![photo_2020-10-02_13-06-03](https://user-images.githubusercontent.com/41234408/94899215-89f19c00-04b0-11eb-8f53-380fb0199d31.jpg) | ![photo_2020-10-02_13-05-45](https://user-images.githubusercontent.com/41234408/94899237-91b14080-04b0-11eb-9e2d-b2ddf47b7377.jpg) | ![photo_2020-10-02_13-05-54](https://user-images.githubusercontent.com/41234408/94899263-9d046c00-04b0-11eb-9b7c-f7af018067eb.jpg) |
|---|---|---|---|
| Text Extraction | Option to choose from camera or gallery | Option for stop, replay, pause | Dialog with information |
