AMV 2019

Special session: Machine Learning in Advanced Machine Vision (AMV2019)

Thursday, 19th of December

A special session in conjunction with ICMLA2019 in Boca Raton, Florida, USA.

Award

At the AMV2019 special session we arranged a best paper award of 700 euro, sponsored by ROBOVISION. The award was assigned as follows: from the top submissions (scored by reviewers on technical content and by the organizers on clarity and fit to the special session), the four best-scoring papers were considered. After the oral presentations of these papers (to take the clarity of the presentation into account), the organizers decided to give the best paper award to Benjamin Lutz for his work titled "Evaluation of Deep Learning for Semantic Image Segmentation in Tool Condition Monitoring".



Finally, we would like to officially thank all the speakers for submitting papers to the AMV2019 session and all the attendees for making this workshop worth organizing. We clearly see that there is a vast interest in a platform that brings together researchers and practitioners of advanced computer vision and machine learning. As organizers, we hope that the AMV workshop can be this platform in the years to come. We hope to see everyone at the AMV2020 workshop!


Home

We are proud to present a special session on Machine Learning in Advanced Machine Vision (AMV2019). This special session is organized in conjunction with ICMLA2019 and takes place in Boca Raton, Florida, USA. Oral presentations will be scheduled on Thursday the 19th of December 2019 (TBC), while poster presentations for this special session will be organized on Wednesday the 18th of December 2019 (TBC).

A large variety of industrially oriented applications (e.g. quality control, pick and place) have in the past decades been successfully implemented throughout a wide range of industries. These implementations are characterized by very controlled surroundings and objects (e.g. CAD models of objects available, controlled lighting). Advanced Machine Vision refers to computer vision and machine learning based systems where such assumptions do not hold, for example when handling biological objects as seen in the food-production industry or when operating outdoors. With recent advancements in sensing and processing power, the potential for further automation in industry based on computer vision and machine learning is clearly present. Furthermore, the exploding domain of computer vision and machine learning algorithms (e.g. deep learning) opens up many new opportunities. However, there is in general a major gap between the topics in focus at major international computer vision and machine learning conferences and the actual industrial needs. More often than not, approaches are hardly transferable into practical and robust solutions for industrial challenges. The ambition of this workshop is to close this gap by bringing together both academics and practitioners from the field.

The special session will take place at the ICMLA2019 conference venue.


Call For Papers

Please find a pdf version of this call for papers by clicking here.

The ambition of this full-day AMV2019 special session is to bring together practitioners and researchers from different disciplines related to Advanced Machine Vision to share ideas and methods on current and future use of computer vision and machine learning algorithms in real-life and industrially relevant systems. This field raises the need for applied research that focuses on the technology transfer from academia towards practitioners, which comes with several challenges such as top-notch accuracies, real-time processing, minimal training data, minimal manual input, user-friendly interfaces, …

To this end we welcome contributions with a strong focus on (but not limited to) the following topics within Advanced Machine Vision:

  • Data input sources

    • Data fusion
    • Multi-modal data
  • Improving robustness of algorithms

    • Real-time performance
    • Non-controlled illumination
    • Non-trivial intra object variability
    • Top-notch accuracies
  • Removing or reducing the need of training data

    • Data augmentation
    • Artificial data
  • Processing power and memory requirements

  • Obtaining training data and ground truth annotations

  • Lab testing versus inline testing

  • Transfer learning towards new application domains

  • Deep learning for advanced machine vision

  • Quality assessment of non-trivial objects

  • Real-life and industrially relevant applications

The special session has a best paper award of 700 euro sponsored by ROBOVISION.



Submission

Authors are encouraged to submit high-quality, original research (i.e. work that has not been previously published or accepted for publication in substantially similar form in any peer-reviewed venue, including journals, conferences, and workshops).

Papers submitted for review should conform to IEEE specifications. Manuscript templates can be downloaded from the IEEE website.

Papers are limited to 8 pages. All papers will go through a double-blind peer review process. Authors' names and affiliations should not appear in the submitted paper. Authors' prior work should be cited in the third person. Authors should also avoid revealing their identities and/or institutions in the text, figures, links, etc. A paper should be submitted to one track only; submission of the same paper to multiple tracks will result in rejection of the paper.

After review, submissions will be accepted either as regular papers (with an oral presentation) or as short papers (with a poster presentation).

  • Regular papers should be up to 6 pages long. Authors of regular papers can add up to 2 extra pages, at an additional cost of $50 per page. A regular paper cannot be more than 8 pages long. References and any other additional material must be included in this number of pages.

  • Short papers should be up to 4 pages long. Authors of short papers can add up to 2 extra pages, at an additional cost of $50 per page. A short paper cannot be more than 6 pages long. References and any other additional material must be included in this number of pages.

All submissions are handled through the CMT submission website of the ICMLA2019 conference: CLICK HERE FOR SUBMISSION. Make sure that you select the correct workshop (Machine Learning in Advanced Machine Vision) in the top left corner of the CMT submission website when hitting the "new submission" button.

Authors are requested to submit their paper in a single PDF file (maximum file size 20MB). Submission of supplementary material is optional (up to 100MB). Accepted file formats for supplementary material are: PDF, PNG, JPG, GIF, ZIP, MP4, WMV, MPEG or AVI.

For questions/remarks regarding the submission, e-mail: steven[dot]puttemans[at]kuleuven[dot]be.


Important Dates

  • Paper submission deadline: September 7, 2019 (11:59 Pacific Time) - FIRM deadline
  • Notification of acceptance + reviews available: October 7, 2019
  • Camera-ready papers & pre-registration deadline: October 24, 2019
  • The ICMLA conference: December 16-19, 2019
  • Special session timeslot: December 19th, 2019 | session 21 | 3:15PM - 5:30PM | room Oasis A&B

People

Workshop Organizers


Thomas B. Moeslund
Professor
Visual Analysis of People Lab
Aalborg University
Denmark

Website: click here
Rikke Gade
Assistant Professor
Visual Analysis of People Lab
Aalborg University
Denmark

Website: click here
Toon Goedeme
Professor
EAVISE Research Group
KU Leuven
Belgium

Website: click here
Steven Puttemans
Post-doctoral researcher
EAVISE Research Group
KU Leuven
Belgium

Website: click here
Ajmal Mian
Professor
School of Physics, Mathematics & Computing, The University of Western Australia
Australia

Website: click here

Supporting universities and research labs



Program Committee


The organizing committee would like to thank all members of the program committee for the work they invested in ensuring that our AMV2019 special session at ICMLA2019 achieves a high-quality standard!

Aleksandra Pizurica, Ghent University
Antonio Haro, HERE Technologies
Bart Goossens, Ghent University
Benhur OrtizJaramillo, Ghent University
Benjamin J Biggs, University of Cambridge
Björn Barz, Friedrich-Schiller-University Jena
Brian Booth, University of Antwerp
Christian Reimers, Friedrich-Schiller-University Jena
Christoph Theiß, Friedrich-Schiller-University Jena
Clemens A Brust, Friedrich-Schiller-University Jena
Cosmin Copot, University of Antwerp
Dimitri Korsch, Friedrich-Schiller-University Jena
Dimitrios Mallis, University of Nottingham
Dries Hulens, KU Leuven
Enrique Sanchez, Samsung AI Centre, Cambridge
Eric Demeester, KU Leuven
Floris De Smedt, RoboVision
Gary A Atkinson, Bristol Robotics Laboratory
Geert De Cubber, Royal Military Academy
Hiep Q Luong, Ghent University
Ignas Budvytis, University of Cambridge
Jesus Pestanapuerta, TU Graz
Jie Chen, University of Oulu
Jifei Song, Queen Mary University of London
Johan Philips, KU Leuven
Jose Luis Sanchez-Lopez, University of Luxembourg
Khanh Duc, NVIDIA
Kristof Van Beeck, KU Leuven
Lei Shi, University of Antwerp
Linda Wills, Georgia Institute of Technology
Lukas Von Stumberg, TU Munich
Marian Verhelst, KU Leuven
Mark Hansen, UWE
Martin Kampel, Vienna University of Technology
Maxime Petit, Universite de Lyon
Modar Alfadly, King Abdullah University of Science and Technology
Mohammadreza Soltaninejad, University of Nottingham
Neil Smith, King Abdullah University of Science and Technology
Oliver Mothes, Friedrich-Schiller-University Jena
Patrick Vandewalle, KU Leuven
Qi Dong, Queen Mary University of London
Rafael De La Guardia, Intel
Roxane Licandro, Medical University of Vienna, TU Wien
Sebastian Zambanini, TU Wien
Sebastiano Battiato, Università di Catania
Serge Miguet, Universite de Lyon
Shital Shah, Microsoft
Simon Donné, Ghent University
Siyang Song, University of Nottingham
Steve Mann, University of Toronto
Sudhanshu Mittal, University Of Freiburg
Sven Sickert, Friedrich-Schiller-University Jena
Tanguy Ophoff, KU Leuven
Timothy Callemein, KU Leuven
Tinne Tuytelaars, KU Leuven
Tomislav Petković, University of Zagreb
Ulrich Schwanecke, RheinMain University of Applied Sciences
Violeta Teodora Trifunov, Friedrich-Schiller-University Jena
Vladyslav Usenko, TU Munich
Walter Bosschaerts, Royal Military Academy
Wiebe Van Ranst, KU Leuven
Wim Abbeloos, Toyota
Xiaohua Huang, University of Oulu
Xin Liu, University of Oulu
Zhiyi Cheng, Queen Mary University of London


Program

As a special session we will have an invited speaker who will talk about the challenges in advanced machine vision applications.

Invited speaker

Ulas Bagci

Explainable Deep Learning for High Risk AI Applications

Biography

Dr. Bagci is a faculty member at the Center for Research in Computer Vision (CRCV) and an Assistant Professor at the University of Central Florida (UCF). His research interests are artificial intelligence, machine learning, and their applications in biomedical and clinical imaging. Previously, he was a staff scientist at the NIH's Center for Infectious Disease Imaging (CIDI) Lab, Department of Radiology and Imaging Sciences (RAD&IS). Dr. Bagci was also the leading scientist (image analyst) in a biosafety/bioterrorism project initiated jointly by NIAID and IRF. Dr. Bagci obtained his PhD degree from the School of Computer Science, University of Nottingham (UK), in collaboration with the Radiology department of the University of Pennsylvania (with Prof. Udupa, MIPG). He is a senior member of the IEEE and RSNA, and a member of scientific organizations such as the Society of Nuclear Medicine and Molecular Imaging (SNMMI), the American Statistical Association (ASA), the Royal Statistical Society (RSS), AAAS, and MICCAI. He has served as a program committee member for various conferences and as a regular reviewer for many prestigious journals in his field, and has received best paper and best reviewer awards. Dr. Bagci is the recipient of many awards, including the NIH's FARE award (twice), RSNA Merit Certificates (5+ times), best paper awards, poster prizes, and several highlights in journal covers, media, and news.

Tentative program

Oral presentation = 10 min presentation + 2 min Q&A

Time   | What                               | Who                         | Title
3:15PM | Welcome                            | Steven Puttemans            | -
3:20PM | Invited speaker                    | Ulas Bagci                  | Explainable Deep Learning for High Risk AI Applications
3:55PM | Oral 1                             | Muhammad Hamdan             | Mass Estimation from Images using Deep Neural Network and Sparse Ground Truth
4:07PM | Oral 2                             | Timothy Callemein           | Anyone Here? Smart Embedded Low-Resolution Omnidirectional Video Sensor to Measure Room Occupancy
4:19PM | Oral 3                             | Ning Jia                    | Coarse Annotation Refinement for Segmentation of Dot-Matrix Batchcodes
4:31PM | Oral 4                             | Benjamin Lutz               | Evaluation of Deep Learning for Semantic Image Segmentation in Tool Condition Monitoring
4:43PM | Oral 5                             | Ji Ling                     | Infrared and Visible Image Fusion via Multi-Discriminators Wasserstein Generative Adversarial Network
4:55PM | Oral 6                             | Dries Hulens                | Deep Diamond Re-ID
5:07PM | Oral 7                             | Rodrigo Leonardo & Amber Hu | Fusing Visual and Textual Information to Determine Content Safety
5:19PM | Best Paper Award + Closing remarks | Steven Puttemans            | -
5:30PM | End of special session             | -                           | -