Using OpenCV: Developed a web app to convert images to manga style

Hey guys

I have developed a web app that converts images to manga style.

Website: https://manga.art-creator.net/
GitHub:
Frontend: https://github.com/yuikoito/manga-creator-frontend
Backend: https://github.com/yuikoito/manga-backend

Like this!

※ This article is the ninth week of trying to write at least one article every week.

Usage

Visit https://manga.art-creator.net/, then upload any image you want to convert to manga style.

It takes just four steps.

  • Select an image

  • Choose effects you want

Once you have selected an image, you are free to choose from a variety of effects.
You can choose to put the effects on the background of the person or on top of the image.
By default, no effect is selected, so you can change it if necessary.

If you want to add an effect to the background, you need to choose an image that has a clear boundary between the person and the background. This is because I am not using machine learning to detect people, but rather implementing contour extraction and replacing the background.

  • Click the Convert button

  • Then, wait

After selecting the effect, click the Convert button, wait a few seconds, and the image will be converted.
Since I don't use machine learning this time, the conversion should be quite fast.

You don’t even need to log in, so have a look and enjoy freely!

Composition

It consists of the following:

Frontend: Nuxt.js + Tailwind CSS
Backend: Python
API: AWS
Hosting: Vercel

I have used the same configuration for almost all of the applications I've released so far.

For how to build the API, see this article.
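Just to illustrate the overall flow, here is a minimal sketch of what calling such a Lambda-backed API could look like, written in Python for consistency with the backend code below. The endpoint URL, field names, and response format are assumptions for illustration only; the actual interface is described in the linked article and in the backend repository.

import base64
import requests

# Hypothetical endpoint and payload format, shown for illustration only
API_URL = "https://xxxx.execute-api.ap-northeast-1.amazonaws.com/convert"

# Encode the image as base64 so it can be sent in a JSON body
with open("input.jpg", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

# Send the image together with the chosen effect option
response = requests.post(API_URL, json={"image": encoded, "effect": "background"})
response.raise_for_status()

# Assume the converted image comes back base64-encoded as well
with open("output.jpg", "wb") as f:
    f.write(base64.b64decode(response.json()["image"]))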

How to add effects

For the background effect, the outline is extracted, cropped, and then combined with the image.

import cv2
import numpy as np

# Add a background effect
def back_filter(src, manga, effect, th):
    # Grayscale conversion
    img = cv2.bitwise_not(src)
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    effect = cv2.cvtColor(effect, cv2.COLOR_BGR2GRAY)

    # Resize the screen tone image to the same size as the input image
    effect = cv2.resize(effect, (img_gray.shape[1], img_gray.shape[0]))

    # Binarization
    ret, img_binary = cv2.threshold(img_gray, th, 255, cv2.THRESH_BINARY)

    # Contour extraction
    contours, _ = cv2.findContours(img_binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # Get the contour with the largest area
    contour = max(contours, key=lambda x: cv2.contourArea(x))

    # Fill the largest contour to make a mask of the person
    mask = np.zeros_like(img_binary)
    cv2.drawContours(mask, [contour], -1, color=255, thickness=-1)

    # Keep the manga image inside the mask and the effect image outside it
    effect = np.where(mask == 0, effect, manga)
    return effect

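For reference, here is a minimal sketch of how the back_filter above could be called. The file names and threshold value are placeholders, and a plain grayscale conversion stands in for the manga-style image, which is produced by a separate step in the real pipeline.

import cv2

# Original photo and a screen tone effect image (a plain JPEG)
src = cv2.imread("person.jpg")
effect = cv2.imread("speed_lines.jpg")

# Placeholder: in the actual app the manga-style image is generated beforehand
manga = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)

# Replace the background with the effect while keeping the person from the manga image
result = back_filter(src, manga, effect, th=127)
cv2.imwrite("result.jpg", result)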

The effect on top of the image is much simpler; it is just a composite.

# Add a front effect
def front_filter(manga, effect):
    # Resize the effect image to the same size as the manga image
    effect = cv2.resize(effect, (manga.shape[1], manga.shape[0]))

    # Generate the mask image from the alpha channel
    mask = effect[:, :, 3]

    # Grayscale the effect
    effect = cv2.cvtColor(effect, cv2.COLOR_BGR2GRAY)

    # Overlay the effect on the manga image wherever the mask is opaque
    manga = np.where(mask == 0, manga, effect)
    return manga


With the above setup, background effect images are saved as JPEG, and front effect images as PNG with a transparent background.
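Since front_filter reads the alpha channel (effect[:, :, 3]), the front effect PNG has to be loaded with its transparency preserved, which cv2.IMREAD_UNCHANGED does. A minimal sketch, assuming the front_filter defined above and placeholder file names:

import cv2

# Manga-style image (grayscale) and a front effect saved as a transparent PNG
manga = cv2.imread("manga.jpg", cv2.IMREAD_GRAYSCALE)

# IMREAD_UNCHANGED keeps the alpha channel, which front_filter uses as a mask
effect = cv2.imread("focus_lines.png", cv2.IMREAD_UNCHANGED)

result = front_filter(manga, effect)
cv2.imwrite("result.png", result)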

That’s it!

Thanks for reading.
This is just a fun app, but I would be very happy if you enjoy it!

Please send me a message if you need anything.

yuiko.dev@gmail.com
https://twitter.com/yui_active
