A simple deep learning library for estimating a set of tags and extracting semantic feature vectors from given illustrations.

Overview

Illustration2Vec

illustration2vec (i2v) is a simple library for estimating a set of tags and extracting semantic feature vectors from given illustrations. For details, please see our main paper.

Requirements

  • Pre-trained models (i2v uses Convolutional Neural Networks. Please download several pre-trained models from here, or execute get_models.sh in this repository).
  • numpy and scipy
  • PIL (Python Imaging Library) or its alternatives (e.g., Pillow)
  • skimage (Image processing library for python)

In addition to the above libraries and the pre-trained models, i2v requires either the caffe or the chainer library. If you are not familiar with deep learning libraries, we recommend chainer, which can be installed with pip (pip install chainer).

How to use

In this section, we show two simple examples -- tag prediction and feature vector extraction -- using the following illustration [1].

[Example illustration: images/miku.jpg]

[1] Hatsune Miku (初音ミク), © Crypton Future Media, INC., http://piapro.net/en_for_creators.html. This image is licensed under the Creative Commons Attribution-NonCommercial 3.0 Unported license (CC BY-NC).

Tag prediction

i2v estimates a number of semantic tags from given illustrations in the following manner.

import i2v
from PIL import Image

illust2vec = i2v.make_i2v_with_chainer(
    "illust2vec_tag_ver200.caffemodel", "tag_list.json")

# In the case of caffe, please use i2v.make_i2v_with_caffe instead:
# illust2vec = i2v.make_i2v_with_caffe(
#     "illust2vec_tag.prototxt", "illust2vec_tag_ver200.caffemodel",
#     "tag_list.json")

img = Image.open("images/miku.jpg")
illust2vec.estimate_plausible_tags([img], threshold=0.5)

estimate_plausible_tags() returns a list of dictionaries, one per input image; each dictionary maps a tag category to a list of (tag, confidence) pairs.

[{'character': [(u'hatsune miku', 0.9999994039535522)],
  'copyright': [(u'vocaloid', 0.9999998807907104)],
  'general': [(u'thighhighs', 0.9956372380256653),
   (u'1girl', 0.9873462319374084),
   (u'twintails', 0.9812833666801453),
   (u'solo', 0.9632901549339294),
   (u'aqua hair', 0.9167950749397278),
   (u'long hair', 0.8817108273506165),
   (u'very long hair', 0.8326570987701416),
   (u'detached sleeves', 0.7448858618736267),
   (u'skirt', 0.6780789494514465),
   (u'necktie', 0.5608364939689636),
   (u'aqua eyes', 0.5527772307395935)],
  'rating': [(u'safe', 0.9785731434822083),
   (u'questionable', 0.020535090938210487),
   (u'explicit', 0.0006299660308286548)]}]

These tags are classified into the following four categories: general tags representing general attributes included in an image, copyright tags representing the specific name of the copyrighted work, character tags representing the specific names of the characters, and rating tags representing the content rating (safe, questionable, or explicit).
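
If you only need tags above some confidence, the returned structure can be filtered directly. The following is a minimal sketch, reusing illust2vec and img from the example above; the variable names and the 0.9 cutoff are only illustrative:

result = illust2vec.estimate_plausible_tags([img], threshold=0.5)

for entry in result:  # one dictionary per input image
    for category in ("general", "copyright", "character", "rating"):
        # each value is a list of (tag, confidence) pairs
        confident = [tag for tag, confidence in entry.get(category, []) if confidence > 0.9]
        print(category, confident)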

If you want to focus on several specific tags, use estimate_specific_tags() instead.

illust2vec.estimate_specific_tags([img], ["1girl", "blue eyes", "safe"])
# -> [{'1girl': 0.9873462319374084, 'blue eyes': 0.01301183458417654, 'safe': 0.9785731434822083}]
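
Because both estimation functions take a list of images, several illustrations can be processed in a single call. The following is a minimal sketch, assuming a directory images/ that contains JPEG files (the glob pattern and paths are only illustrative):

import glob
from PIL import Image

paths = sorted(glob.glob("images/*.jpg"))
imgs = [Image.open(p) for p in paths]

# one score dictionary per input image, in the same order as paths
results = illust2vec.estimate_specific_tags(imgs, ["1girl", "blue eyes", "safe"])
for path, scores in zip(paths, results):
    print(path, scores)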

Feature vector extraction

i2v can extract a semantic feature vector from an illustration.

import i2v
from PIL import Image

# For feature vector extraction, you do not need to specify the tag list.
illust2vec = i2v.make_i2v_with_chainer("illust2vec_ver200.caffemodel")

# illust2vec = i2v.make_i2v_with_caffe(
#     "illust2vec.prototxt", "illust2vec_ver200.caffemodel")

img = Image.open("images/miku.jpg")

# extract a 4,096-dimensional feature vector
result_real = illust2vec.extract_feature([img])
print("shape: {}, dtype: {}".format(result_real.shape, result_real.dtype))
print(result_real)

# i2v also supports a 4,096-bit binary feature vector
result_binary = illust2vec.extract_binary_feature([img])
print("shape: {}, dtype: {}".format(result_binary.shape, result_binary.dtype))
print(result_binary)

The output is the following:

shape: (1, 4096), dtype: float32
[[ 7.47459459  3.68610668  0.5379501  ..., -0.14564702  2.71820974
   7.31408596]]
shape: (1, 512), dtype: uint8
[[246 215  87 107 249 190 101  32 187  18 124  90  57 233 245 243 245  54
  229  47 188 147 161 149 149 232  59 217 117 112 243  78  78  39  71  45
  235  53  49  77  49 211  93 136 235  22 150 195 131 172 141 253 220 104
  163 220 110  30  59 182 252 253  70 178 148 152 119 239 167 226 202  58
  179 198  67 117 226  13 204 246 215 163  45 150 158  21 244 214 245 251
  124 155  86 250 183  96 182  90 199  56  31 111 123 123 190  79 247  99
   89 233  61 105  58  13 215 159 198  92 121  39 170 223  79 245  83 143
  175 229 119 127 194 217 207 242  27 251 226  38 204 217 125 175 215 165
  251 197 234  94 221 188 147 247 143 247 124 230 239  34  47 195  36  39
  111 244  43 166 118  15  81 177   7  56 132  50 239 134  78 207 232 188
  194 122 169 215 124 152 187 150  14  45 245  27 198 120 146 108 120 250
  199 178  22  86 175 102   6 237 111 254 214 107 219  37 102 104 255 226
  206 172  75 109 239 189 211  48 105  62 199 238 211 254 255 228 178 189
  116  86 135 224   6 253  98  54 252 168  62  23 163 177 255  58  84 173
  156  84  95 205 140  33 176 150 210 231 221  32  43 201  73 126   4 127
  190 123 115 154 223  79 229 123 241 154  94 250   8 236  76 175 253 247
  240 191 120 174 116 229  37 117 222 214 232 175 255 176 154 207 135 183
  158 136 189  84 155  20  64  76 201  28 109  79 141 188  21 222  71 197
  228 155  94  47 137 250  91 195 201 235 249 255 176 245 112 228 207 229
  111 232 157   6 216 228  55 153 202 249 164  76  65 184 191 188 175  83
  231 174 158  45 128  61 246 191 210 189 120 110 198 126  98 227  94 127
  104 214  77 237  91 235 249  11 246 247  30 152  19 118 142 223   9 245
  196 249 255   0 113   2 115 149 196  59 157 117 252 190 120  93 213  77
  222 215  43 223 222 106 138 251  68 213 163  57  54 252 177 250 172  27
   92 115 104 231  54 240 231  74  60 247  23 242 238 176 136 188  23 165
  118  10 197 183  89 199 220  95 231  61 214  49  19  85  93  41 199  21
  254  28 205 181 118 153 170 155 187  60  90 148 189 218 187 172  95 182
  250 255 147 137 157 225 127 127  42  55 191 114  45 238 228 222  53  94
   42 181  38 254 177 232 150  99]]
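
These feature vectors are meant to be compared across illustrations, for example for similarity search. The following is a minimal sketch, reusing illust2vec and img from the example above; the second image path images/other.jpg is only illustrative and is not shipped with the repository. It compares two images by cosine similarity on the real-valued features and by Hamming distance on the binary features:

import numpy as np
from PIL import Image

img2 = Image.open("images/other.jpg")  # hypothetical second illustration

feat1 = illust2vec.extract_feature([img])[0]
feat2 = illust2vec.extract_feature([img2])[0]

# cosine similarity between the two 4,096-dimensional float32 vectors
cosine = float(np.dot(feat1, feat2) / (np.linalg.norm(feat1) * np.linalg.norm(feat2)))
print("cosine similarity:", cosine)

bin1 = illust2vec.extract_binary_feature([img])[0]
bin2 = illust2vec.extract_binary_feature([img2])[0]

# Hamming distance between the two 4,096-bit codes (each packed into 512 uint8 values)
hamming = int(np.count_nonzero(np.unpackbits(bin1 ^ bin2)))
print("Hamming distance:", hamming)

A larger cosine similarity or a smaller Hamming distance indicates more similar illustrations.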

License

The pre-trained models and the other files we have provided are licensed under the MIT License.

Comments
  • KeyError: 'conv6_4'

    Hi,

    On Mac, I downloaded the models, but I can't load them. It seems that the loaded model does not have any layers, even when I use an old version of chainer (< 2.0).

    Is it a Mac issue?

    opened by jilljenn 3
  • Some advice about license compliance

    Hello, such a nice repository benefits me a lot, and it is so kind of you to make it open source!

    Question: There are some possible legal issues with the license of your repository when you combine numerous third-party packages. For instance, numpy and scipy, which you import, are both licensed under the BSD License. However, the MIT License of your repository is less strict than those package licenses, which may violate license compatibility within your repository and bring legal and financial risks.

    Advice: You can select another proper license for your repository, or write a custom license with a license exception if some license terms cannot be reconciled consistently.

    Best wishes!

    opened by Ashley123456789 0
  • Why does the probability of the label of the same picture change?

    I found that every time I estimate the label probability of the same image, the result is different.

    That is, for the same picture (for example, the picture you provided miku.jpg), I ran the program twice and the results were

    [two screenshots of the program output]

    Is this phenomenon normal? And why does it occur? (Maybe I was not careful enough and did not find the random function in the program.)

    Thanks for this project.

    opened by xiaobanni 0
  • KeyError: 'encode1'

    I can run the estimate_plausible_tags function just fine; the model seems to be working, but calling extract_feature or extract_binary_feature returns KeyError: 'encode1'.

    Looking into it a little, it seems the error is raised by feature = self._extract(imgs, layername='encode1'), and in the illust2vec model the layers are only iterations of convX_Y, reluX_Y, and poolX. Passing actual layer names doesn't get rid of the error or produce any results, so I'm lost with this... On Windows, chainer is v5.2.

    opened by Vullan 1
  • hydrus support

    This feature is only for Python 3.

    It has been tested with Python 3.6.5 on Ubuntu 18.04.

    i2v can run a local server by doing the following:

    • put illust2vec_tag_ver200.caffemodel and tag_list.json in the current working directory
    • to run on host 127.0.0.1 and port 5011, run the following command:
      $ i2v run -h 127.0.0.1 -p 5011
    
    opened by johndpope 2
Releases
  • v2.0.0

Owner
Masaki Saito (Researcher @ Preferred Networks, Inc., Japan)