Visual Computing Talks

    Tuesday 25. April 2017, 13:00

    Title: Where Virtual Meets Real: Perceptually-Driven Inputs for New Output Devices
    Speaker: Piotr Didyk, Max Planck Institute for Informatics & Saarland University
    Location: ICG Seminar Room
    Abstract: There has been a tremendous increase in the quality and
    number of new output devices, such as stereo and automultiscopic
    screens, portable and wearable displays, and 3D printers. Some have
    already entered mass production and attracted considerable user
    attention; others will follow soon. Unfortunately, the capabilities
    of these emerging technologies outpace the methods and tools
    available for creating content. Moreover, our current understanding
    of how these new technologies influence user experience is
    insufficient to fully exploit their advantages. In this talk, I will
    demonstrate that careful combinations of new hardware, computation,
    and models of human perception can lead to solutions that provide a
    significant increase in perceived quality. More precisely, I will
    show how careful rendering of frames can improve perceived spatial
    resolution beyond the physical capabilities of display devices.
    Next, I will discuss techniques for overcoming the limitations of
    current 3D displays. Finally, in the context of 3D printing, I will
    present methods for specifying objects for multi-material 3D
    printing.

    Short Bio: Piotr Didyk is an Independent Research Group Leader at the
    Cluster of Excellence on “Multimodal Computing and Interaction” at
    Saarland University (Germany), where he leads a group on Perception,
    Display, and Fabrication. He is also appointed as a Senior Researcher
    at the Max Planck Institute for Informatics. Prior to this, he spent
    two years as a postdoctoral associate at the Massachusetts Institute
    of Technology (MIT). In 2012, he obtained his Ph.D. from the Max
    Planck Institute for Informatics and Saarland University for his work
    on perceptual display. During his studies, he was also a visiting
    student at MIT. His research interests include human perception, new
    display technologies, image and video processing, and computational
    fabrication. He focuses on techniques that account for properties of
    the human sensory system and human interaction in order to improve
    the perceived quality of final images, videos, and 3D prints. More
    info: https://people.mpi-inf.mpg.de/~pdidyk/.

    Tuesday 04. April 2017, 13:00

    Title: Sparse Label Propagation
    Speaker: Alexander Jung, Assistant Professor, Aalto University
    Location: ICG Seminar Room
    Abstract: In this talk, I present some of our most recent work on applying tools from compressed sensing to (semi-supervised) machine learning with massive network-structured datasets, i.e., big data over networks. We expect compressed sensing ideas to be as game-changing here as they were for digital signal processing. In particular, I will present a sparse label propagation algorithm which efficiently learns the labels of data points from only a few labeled training examples. The algorithm is inspired by compressed sensing methods and yields a simple sufficient condition on the network structure and the available label information under which accurate learning is possible.
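
    As a rough illustration of the setting the abstract refers to, the Python
    sketch below runs plain iterative label propagation on a toy graph with
    only two labeled nodes. The graph, labels, and iteration count are made-up
    assumptions; the sparse, compressed-sensing-inspired algorithm from the
    talk is not reproduced here.

        import numpy as np

        def label_propagation(W, y, labeled, n_iter=100):
            """Propagate labels over a weighted graph adjacency matrix W.

            W       : (n, n) symmetric, nonnegative adjacency matrix
            y       : (n, k) one-hot labels (all-zero rows for unlabeled nodes)
            labeled : boolean mask of length n marking the labeled nodes
            """
            # Row-normalize W so every node averages its neighbours' labels.
            degrees = W.sum(axis=1, keepdims=True)
            P = W / np.maximum(degrees, 1e-12)
            F = y.astype(float).copy()
            for _ in range(n_iter):
                F = P @ F                  # diffuse label information along edges
                F[labeled] = y[labeled]    # clamp the known labels
            return F.argmax(axis=1)

        # Toy chain graph 0-1-2-3-4: nodes 0 and 4 carry classes 0 and 1.
        W = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
        y = np.zeros((5, 2))
        y[0, 0] = 1.0
        y[4, 1] = 1.0
        labeled = np.array([True, False, False, False, True])
        print(label_propagation(W, y, labeled))  # [0 0 0 1 1] (node 2 ties, breaks toward class 0)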

    Tuesday 02. February 2016, 13:00

    Title: Solving Dense Image Matching in Real-Time using Discrete-Continuous Optimization
    Speaker: Alexander Shekhovtsov, Christian Reinbacher
    Location: ICG Seminar Room
    Abstract: Dense image matching is a fundamental low-level problem in Computer Vision, which has received tremendous attention from both the discrete and the continuous optimization communities. The goal of this work is to combine the advantages of discrete and continuous optimization in a coherent framework. We devise a model based on energy minimization, to be optimized by both discrete and continuous algorithms in a consistent way. In the discrete setting, we propose a novel optimization algorithm that can be massively parallelized. In the continuous setting, we tackle the problem of non-convex regularizers by a formulation based on differences of convex functions. The resulting hybrid discrete-continuous algorithm can be efficiently accelerated by modern GPUs, and we demonstrate its real-time performance for the applications of dense stereo matching and optical flow.
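
    The Python sketch below illustrates the difference-of-convex (DC) idea
    mentioned in the abstract on a 1-D toy energy rather than a full
    image-matching problem: a truncated quadratic regularizer is split into
    two convex parts, and the concave part is linearized at each iterate (the
    classic DCA scheme). The toy objective and all constants are illustrative
    assumptions, not the authors' model.

        def dca_truncated_quadratic(d, p, lam=1.0, tau=1.0, x0=0.0, n_iter=20):
            """Minimize (x - d)^2 + lam * min((x - p)^2, tau) by DCA.

            DC decomposition of the objective:
                g(x) = (x - d)^2 + lam * (x - p)^2    (convex)
                h(x) = lam * max((x - p)^2 - tau, 0)  (convex)
            so the objective equals g(x) - h(x).
            """
            x = x0
            for _ in range(n_iter):
                # Subgradient of the concave part -h at the current iterate.
                s = 2.0 * lam * (x - p) if (x - p) ** 2 > tau else 0.0
                # Closed-form minimizer of the convex surrogate g(x) - s * x.
                x = (2.0 * d + 2.0 * lam * p + s) / (2.0 + 2.0 * lam)
            return x

        # With a far-away prior p, the truncation saturates and the estimate
        # stays near the data value d instead of being dragged toward p.
        print(dca_truncated_quadratic(d=0.0, p=5.0))  # -> 0.0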

    Tuesday 19. January 2016, 13:00

    Title: BaCoN: Building a Classifier from only N Samples
    Speaker: Georg Waltner
    Location: ICG Seminar Room
    Abstract: We propose a model that is able to learn new object classes from a very limited number of training samples (i.e., 1 to 5), while requiring near-zero run-time cost for adding each new class. After extracting Convolutional Neural Network (CNN) features, we discriminatively learn embeddings that separate the classes in feature space. The proposed method is especially useful for applications such as dish or logo recognition, where users typically add object classes comprising a wide variety of representations. Another benefit of our method is its low demand for computing power and memory, making it applicable to object classification on embedded devices. We demonstrate on the Food-101 dataset that even a single training example is sufficient to recognize new object classes and to considerably improve results over the probabilistic Nearest Class Means (NCM) formulation.
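
    For context, the Python sketch below shows the plain nearest-class-mean
    idea that the abstract uses as its baseline: on top of fixed (e.g. CNN)
    feature vectors, a new class is registered by averaging its few training
    features, so adding a class is nearly free at run time. The features are
    toy values, and the discriminative embedding learned in the talk is not
    reproduced here.

        import numpy as np

        class NearestClassMean:
            def __init__(self):
                self.means = {}  # class label -> mean feature vector

            def add_class(self, label, features):
                """Register a new class from as few as one feature vector."""
                self.means[label] = np.mean(features, axis=0)

            def predict(self, feature):
                """Return the label whose class mean is closest in L2 distance."""
                return min(self.means,
                           key=lambda c: np.linalg.norm(feature - self.means[c]))

        # Toy 2-D "features": two classes registered from 1 and 3 samples.
        clf = NearestClassMean()
        clf.add_class("pizza", np.array([[1.0, 0.2]]))
        clf.add_class("salad", np.array([[0.1, 1.0], [0.0, 0.9], [0.2, 1.1]]))
        print(clf.predict(np.array([0.9, 0.3])))  # -> "pizza"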