It will generate captions for the given images. For example:
- `app.py` — Main code to run to create the server.
- `generate_captions.py` — Python module that compiles the AI model and makes predictions.
- `embedding_matrix.pkl` — Matrix for the word embeddings of the vocabulary.
- `train_descriptions.pkl` — Dictionary to map image names to the captions for training data.
- `word_to_index.pkl` — Dictionary to map words in the vocabulary to their index numbers.
- `index_to_word.pkl` — Dictionary to map index numbers to their words in the vocabulary.
- `results` — Contains samples of results from testing.
- `static` — Stores images input by the user while generating captions.
- `templates` — Contains the `index.html` to generate the UI.
- `preparing_data.ipynb` — Jupyter Notebook to prepare the data for training.
- `training_model.ipynb` — Jupyter Notebook to train the model.
- `generate_captions.ipynb` — Jupyter Notebook to import all the essentials and generate the captions.
- `model_weights` — Folder that contains all the models generated over 40 epochs during training. (Link: https://drive.google.com/open?id=1EzkEjTSQAAlKejyJAwRfKbKqBtDNHwv7)
- `glove.6B.50d.txt` — Text file containing the mapping of words to their corresponding 50-dimensional vectors. (Link: https://drive.google.com/open?id=1mqHRTOyF87fHoiuRZwOlgcYwcCynQ5Ki)
- `encoding_train_features.pkl` — Dictionary to map training images to their corresponding 2048-dimensional vectors. (Link: https://drive.google.com/open?id=1qO4fgm8qUu0eIslMpg6oqqmcZil5qs9k)
- `flickr30k_images` — Training images and their captions. (Link: https://www.kaggle.com/hsankesara/flickr-image-dataset)
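The vocabulary pickles are plain Python dictionaries. A minimal sketch of how `word_to_index.pkl` and `index_to_word.pkl` are expected to relate, using toy data in place of the real files (which would be loaded with `pickle.load`); the structure shown here is an assumption, not the repository's actual contents:

```python
# Toy stand-ins for the pickled vocabulary mappings. The real files would
# be loaded with: pickle.load(open("word_to_index.pkl", "rb")), etc.
word_to_index = {"startseq": 1, "a": 2, "dog": 3, "endseq": 4}

# The two mappings are inverses of each other.
index_to_word = {i: w for w, i in word_to_index.items()}

# Round-trip a caption through the vocabulary: words -> indices -> words.
encoded = [word_to_index[w] for w in ["startseq", "a", "dog", "endseq"]]
decoded = [index_to_word[i] for i in encoded]
print(decoded)  # -> ['startseq', 'a', 'dog', 'endseq']
```

The `startseq`/`endseq` tokens shown are the conventional start and end markers used in caption models of this kind.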
- Clone the repository.
  ```shell
  git clone https://www.github.com/parask11/image-captioner
  ```
- Go into the directory.
  ```shell
  cd image-captioner
  ```
- Install the requirements.
  ```shell
  pip install -r requirements.txt
  ```
- Run the Python script. It will start a server.
  ```shell
  python app.py
  ```
- Open this link in the browser.
  ```
  localhost:5000
  ```
- The UI will appear. Upload images and generate the captions!
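Conceptually, caption generation of this kind produces one word at a time until an end token appears. A toy sketch of such a greedy decoding loop, with a canned lookup standing in for the trained model (all names here are illustrative, not the repository's actual code):

```python
# Toy greedy caption decoder. A real model would predict the next word
# from the image features plus the partial caption; fake_predict_next is
# a hypothetical stand-in that returns a scripted sequence.
index_to_word = {1: "startseq", 2: "a", 3: "dog", 4: "runs", 5: "endseq"}
word_to_index = {w: i for i, w in index_to_word.items()}

def fake_predict_next(partial_indices):
    """Stand-in for the model: next word index given the caption so far."""
    script = [2, 3, 4, 5]  # "a dog runs endseq"
    return script[len(partial_indices) - 1]

def generate_caption(predict_next, max_len=10):
    indices = [word_to_index["startseq"]]
    for _ in range(max_len):  # cap the length in case endseq never comes
        nxt = predict_next(indices)
        if index_to_word[nxt] == "endseq":
            break
        indices.append(nxt)
    # Drop the leading startseq token when rendering the caption.
    return " ".join(index_to_word[i] for i in indices[1:])

print(generate_caption(fake_predict_next))  # -> a dog runs
```

Swapping `fake_predict_next` for a real model call is the essential difference between this sketch and a working captioner.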


