GMUM PROJECTS

PROJECT NAME | KEYWORDS | DESCRIPTION | PEOPLE | CONTACT | STUDENT NEEDED | REQUIREMENTS/ADDITIONAL INFO

PROJECT NAME: Few-shot learning (with hypernetworks)
KEYWORDS: few-shot learning, meta-learning
DESCRIPTION: Few-shot learning (FSL), also referred to as low-shot learning (LSL) in some sources, is a type of machine learning problem where the training dataset contains limited information. Few-shot learning aims for deep learning models to predict the correct class of instances when only a small number of examples is available in the training dataset.
PEOPLE: Przemysław Spurek
CONTACT: przemyslaw.spurek[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: Interpretable Latent Space Interpolation in Molecular/Graph Generative Models
KEYWORDS: deep learning, cheminformatics, autoencoders
DESCRIPTION: The goal of this project is to interpolate in the latent space of an autoencoder model so that molecules (graphs) along the interpolation path are produced using only "legal" operations. Legal operations in chemistry are those that can be achieved in a chemical reaction. More generally, each interpolation step in graphs should involve one operation such as adding nodes, removing nodes, or modifying connections in the graph.
PEOPLE: Tomasz Danel, Igor Podolak
CONTACT: tomasz.danel[AT]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch

PROJECT NAME: Graph Convolution Neural Network
KEYWORDS: graph convolutional neural networks
DESCRIPTION: Graph Convolutional Networks (GCNs) have recently become the primary choice for learning from graph-structured data, superseding hash fingerprints in representing chemical compounds. However, GCNs lack the ability to take into account the ordering of node neighbors, even when there is a geometric interpretation of the graph vertices that provides an order based on their spatial positions. To remedy this issue, we propose the Geometric Graph Convolutional Network, which uses spatial features to efficiently learn from graphs that can be naturally located in space. Our contribution is threefold: we propose a GCN-inspired architecture which (i) leverages node positions, (ii) is a proper generalisation of both GCNs and Convolutional Neural Networks (CNNs), (iii) benefits from augmentation which further improves the performance and assures invariance with respect to the desired properties.
PEOPLE: Przemysław Spurek, Jacek Tabor, et al.
CONTACT: przemyslaw.spurek[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).
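
For intuition, a rough sketch of a position-aware graph convolution is given below; it illustrates the general idea only, not the layer proposed in the project. Messages from neighbours are gated by a learned function of relative spatial positions.

```python
import torch
import torch.nn as nn

class GeoGCNLayer(nn.Module):
    """Illustrative position-aware graph convolution: neighbour messages are
    modulated by a learned function of relative 3D positions."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.pos = nn.Sequential(nn.Linear(3, dim), nn.Tanh())

    def forward(self, h, xyz, adj):
        # rel[i, j] = position of node j relative to node i
        rel = xyz.unsqueeze(0) - xyz.unsqueeze(1)           # (n, n, 3)
        gate = self.pos(rel) * adj.unsqueeze(-1)            # zero out non-edges
        msgs = gate * self.msg(h).unsqueeze(0)              # (n, n, dim)
        return torch.relu(h + msgs.sum(dim=1))              # aggregate neighbours

layer = GeoGCNLayer(dim=16)
h, xyz = torch.randn(5, 16), torch.randn(5, 3)              # features, positions
adj = (torch.rand(5, 5) > 0.5).float()                      # toy adjacency matrix
print(layer(h, xyz, adj).shape)                             # torch.Size([5, 16])
```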

PROJECT NAME: Continual learning
KEYWORDS: continual learning, meta-learning
DESCRIPTION: The project, currently under development, is based on using hypernetworks for the continual learning task.
PEOPLE: Przemysław Spurek
CONTACT: przemyslaw.spurek[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: Uncertainty estimation for early-exit methods improvement
KEYWORDS: conditional computation, uncertainty
DESCRIPTION: Early-exit methods decrease processing time for "easy" samples. Usually, the entropy or confidence (max softmax probability) of a head is compared to a threshold to decide whether to exit early. However, it is not clear if this is the best design choice. Fortunately, multiple computationally cheap uncertainty estimation methods were published recently. This project would explore the application of those methods to improving early-exit results (inference time and accuracy). The student would have to reimplement two or three (simple) uncertainty estimation methods, described in published papers, in an early-exit setup. Depending on their preferences, the student can start from a given code repository or from scratch.
PEOPLE: Bartosz Wójcik
CONTACT: bartwojc[AT]gmail.com
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch
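
For reference, a minimal sketch of the baseline the project would question: exit at the first head whose max-softmax confidence crosses a threshold. The toy backbone and the threshold value are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """A toy backbone with a classification head after every block; inference
    stops at the first head whose max-softmax confidence exceeds a threshold."""
    def __init__(self, dim=32, n_classes=10, n_blocks=3, threshold=0.9):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(n_blocks))
        self.heads = nn.ModuleList(nn.Linear(dim, n_classes) for _ in range(n_blocks))
        self.threshold = threshold

    @torch.no_grad()
    def forward(self, x):
        for i, (block, head) in enumerate(zip(self.blocks, self.heads)):
            x = block(x)
            probs = head(x).softmax(dim=-1)
            if probs.max().item() >= self.threshold or i == len(self.blocks) - 1:
                return probs, i        # prediction and the index of the exit taken

net = EarlyExitNet()
probs, exit_idx = net(torch.randn(1, 32))
print(exit_idx, probs.argmax().item())
```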

PROJECT NAME: Analyzing 3D point clouds
KEYWORDS: deep neural networks
DESCRIPTION: The project involves creating generative models dedicated to 3D objects.
PEOPLE: Przemysław Spurek, Jacek Tabor, et al.
CONTACT: przemyslaw.spurek[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: Augmenting SGD optimizers with low-dimensional 2nd-order information
KEYWORDS: SGD optimization
DESCRIPTION: SGD optimization is currently dominated by 1st-order methods like Adam. Augmenting them with 2nd-order information would suggest, for example, the optimal step size. Such an online parabola model can be maintained nearly for free by extracting linear trends from the gradient sequence (arXiv:1907.07063), and is planned to be included to improve standard methods like Adam.
PEOPLE: Jarek Duda
CONTACT: jaroslaw.duda[at]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know the basics of TensorFlow or PyTorch; experience in mathematical analysis is preferred.
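
A minimal sketch of the idea on a noiseless toy quadratic, under the simplifying assumption of a single modeled direction: exponential moving averages of the position and the directional derivative give the linear trend g ≈ λs + c of the gradient sequence, whose root is the modeled parabola's minimum. The constants, the clipping, and the fallback rule are illustrative, not the algorithm from the paper.

```python
import numpy as np

# Toy quadratic objective f(x) = 0.5 x^T A x with gradient A x.
A = np.diag([1.0, 10.0])
grad = lambda x: A @ x

x = np.array([3.0, -2.0])
d = np.array([1.0, 1.0]) / np.sqrt(2.0)      # single modeled direction

# EMAs of position s and directional derivative g along d, plus second
# moments, yield the linear trend g ~ lam * s + c of the gradient sequence.
beta, m = 0.9, {"s": 0.0, "g": 0.0, "ss": 0.0, "sg": 0.0}
for t in range(100):
    s, g = d @ x, d @ grad(x)
    for key, val in (("s", s), ("g", g), ("ss", s * s), ("sg", s * g)):
        m[key] = beta * m[key] + (1 - beta) * val
    var = m["ss"] - m["s"] ** 2
    lam = (m["sg"] - m["s"] * m["g"]) / var if var > 1e-12 else 0.0
    if lam > 1e-6:
        # Newton-like jump to the parabola minimum, clipped for safety
        x = x - np.clip(g / lam, -10, 10) * d
    else:
        x = x - 0.05 * grad(x)               # plain gradient step as a fallback
print(x)   # the component of x along d is driven to the modeled minimum
```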

PROJECT NAME: Hierarchical correlation reconstruction
KEYWORDS: modelling joint distributions, non-stationarity
DESCRIPTION: HCR combines advantages of machine learning and statistics: at very low cost it offers an MSE-optimal model for the joint distribution of multiple variables as a polynomial, by decomposing statistical dependencies into (interpretable) mixed moments. It allows extracting and exploiting very weak statistical dependencies that are not accessible to other methods like KDE. It can also model their time evolution for non-stationary time series, e.g. in financial data. This project develops the method and searches for its further applications (slides: https://www.dropbox.com/s/7u6f2zpreph6j8o/rapid.pdf).
PEOPLE: Jarek Duda
CONTACT: jaroslaw.duda[at]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: Experience in mathematics and statistics is preferred.
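
A compact sketch of the core HCR computation on synthetic data: each marginal is normalized to roughly uniform [0,1] with its empirical CDF, and mixed-moment coefficients are estimated in an orthonormal (Legendre) polynomial basis. The basis size and the data are illustrative.

```python
import numpy as np

# Orthonormal Legendre basis on [0,1]: f0 = 1, f1 = sqrt(3)(2u-1), f2 = sqrt(5)(6u^2-6u+1)
def basis(u):
    return np.stack([np.ones_like(u),
                     np.sqrt(3) * (2 * u - 1),
                     np.sqrt(5) * (6 * u**2 - 6 * u + 1)], axis=-1)

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = 0.6 * x + 0.8 * rng.normal(size=2000)    # correlated pair of variables

# Normalize each marginal to ~uniform[0,1] with its empirical CDF (rank trick).
to_unif = lambda v: (np.argsort(np.argsort(v)) + 0.5) / len(v)
u, w = to_unif(x), to_unif(y)

# MSE-optimal coefficients are simply averages of products of basis functions:
# a[j,k] = E[f_j(u) f_k(w)]; a[1,1] behaves like a rank correlation.
Fu, Fw = basis(u), basis(w)
a = Fu.T @ Fw / len(u)
print(np.round(a, 3))    # a[0,0] == 1; off-diagonal entries encode dependencies

# Reconstructed joint density on [0,1]^2: rho(u,w) = sum_jk a[j,k] f_j(u) f_k(w)
rho = lambda uu, ww: basis(np.asarray(uu)) @ a @ basis(np.asarray(ww)).T
print(float(rho(0.9, 0.9)), float(rho(0.9, 0.1)))  # high-high beats high-low
```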

PROJECT NAME: Rand-index classification
KEYWORDS: hierarchical classification, decision trees and neural networks, interpretable ML
DESCRIPTION: We want to use a pairwise loss as a classification loss. The goal is to construct a graph/tree structure for hierarchical classification. In contrast to the typical entropy and Gini index measures used in decision trees, a pairwise loss can be trained directly on pairs rather than on sets. Given a tree structure, we can use existing labels for defining classification rules. The model will be used in multi-label classification, extreme classification, interpretable models, etc.
PEOPLE: Marek Śmieja, Jacek Tabor, et al.
CONTACT: marek.smieja[at]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).
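
One possible reading of "training directly on pairs", sketched under the assumption of soft group assignments (the project's actual loss may differ): the predicted probability that two samples fall into the same group is matched against the same-class indicator.

```python
import torch
import torch.nn.functional as F

def pairwise_rand_loss(logits, labels):
    """Binary cross-entropy between predicted same-group probabilities
    and the ground-truth same-class indicator, over all sample pairs."""
    p = F.softmax(logits, dim=1)            # soft assignment to groups/leaves
    same_pred = p @ p.t()                   # P(i and j fall in the same group)
    same_true = (labels[:, None] == labels[None, :]).float()
    eye = torch.eye(len(labels), dtype=torch.bool)
    same_pred = same_pred.clamp(1e-6, 1 - 1e-6)[~eye]   # drop self-pairs
    return F.binary_cross_entropy(same_pred, same_true[~eye])

# Tiny usage example on random data: 8 samples, 4 candidate groups, 3 classes.
logits = torch.randn(8, 4, requires_grad=True)
labels = torch.randint(0, 3, (8,))
loss = pairwise_rand_loss(logits, labels)
loss.backward()
print(float(loss))
```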

PROJECT NAME: Semi-supervised learning on tabular data
KEYWORDS: semi-supervised classification, real-life datasets
DESCRIPTION: Most semi-supervised and self-supervised models are designed for image data, where augmentations are straightforward. In this project, we will investigate whether similar ideas can be used for tabular data (medical data, market data, etc.), which are common in many applications (example of such a model: https://papers.nips.cc/paper/2020/file/7d97667a3e056acab9aaf653807b4a03-Paper.pdf). For this purpose, we will design specific augmentations for tabular data and apply typical paradigms of semi-supervised learning, e.g. FixMatch: https://arxiv.org/abs/2001.07685
PEOPLE: Marek Śmieja
CONTACT: marek.smieja[at]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).
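
A sketch of how the two ingredients could combine. The augmentations (feature noise and random feature dropout) are assumptions for illustration; the unlabeled-loss skeleton follows the FixMatch recipe of pseudo-labeling a weak view and training a strong view on confident pseudo-labels.

```python
import torch
import torch.nn.functional as F

def augment(x, noise_std=0.05, mask_prob=0.1):
    """A simple tabular augmentation (an assumption, not from the papers):
    Gaussian feature noise plus random feature dropout."""
    mask = (torch.rand_like(x) > mask_prob).float()
    return (x + noise_std * torch.randn_like(x)) * mask

def fixmatch_unlabeled_loss(model, x_unlabeled, threshold=0.95):
    """FixMatch-style consistency: pseudo-label from a weak view,
    train the strong view on confident pseudo-labels only."""
    with torch.no_grad():
        probs = F.softmax(model(augment(x_unlabeled, 0.01, 0.0)), dim=1)
        conf, pseudo = probs.max(dim=1)
    logits_strong = model(augment(x_unlabeled, 0.1, 0.2))
    keep = (conf >= threshold).float()
    loss = F.cross_entropy(logits_strong, pseudo, reduction="none")
    return (keep * loss).mean()

model = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 3))
print(float(fixmatch_unlabeled_loss(model, torch.randn(64, 16))))
```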

PROJECT NAME: Uncertainty in semi-supervised and active learning
KEYWORDS: semi-supervised learning, active learning, uncertainty scores
DESCRIPTION: Typical semi-supervised and active learning models are based on the softmax probability model (https://arxiv.org/abs/2001.07685). However, softmax is not the best measure for uncertainty estimation, which is crucial for deciding which examples should be labeled. We will examine a one-vs-all probability model (based on a sequence of sigmoids) in these settings.
PEOPLE: Marek Śmieja
CONTACT: marek.smieja[at]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).
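
The contrast between the two uncertainty scores in a few lines: softmax confidence is relative (it always sums to 1 across classes), while one-vs-all sigmoid scores can be low for all classes at once, which is exactly what one wants for out-of-distribution inputs.

```python
import torch

logits = torch.randn(5, 10)    # 5 samples, 10 classes

# Softmax confidence: relative evidence only; probabilities sum to 1 per row,
# so even a sample unlike every class can look "confident".
softmax_conf = logits.softmax(dim=1).max(dim=1).values

# One-vs-all confidence: each class has its own sigmoid "is this class?" score,
# so a sample far from all classes can score low everywhere.
ova_conf = logits.sigmoid().max(dim=1).values

print(softmax_conf)
print(ova_conf)
```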

PROJECT NAME: Hypernetworks for data streams
KEYWORDS: hypernetworks, sequential data, time series
DESCRIPTION: The hypernetwork is a recent neural network model which has been successfully applied to image restoration problems, 3D point clouds, etc. Data streams, as sequences of data points, have similar characteristics. In this project we will investigate whether hypernetworks can be used for this data type. We will compare them with typical recurrent and convolutional networks.
PEOPLE: Marek Śmieja
CONTACT: marek.smieja[at]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).
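
A minimal hypernetwork sketch for orientation (the stream embedding and all dimensions are placeholders): one network maps a summary of the current data chunk to the weights of a small target network that then processes the chunk.

```python
import torch
import torch.nn as nn

class HyperLinear(nn.Module):
    """Minimal hypernetwork: an embedding of the current task/chunk is mapped
    to the weights and bias of a target linear layer applied to x."""
    def __init__(self, emb_dim=8, in_dim=4, out_dim=2):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        self.hyper = nn.Linear(emb_dim, in_dim * out_dim + out_dim)

    def forward(self, x, emb):
        params = self.hyper(emb)
        W = params[: self.in_dim * self.out_dim].view(self.out_dim, self.in_dim)
        b = params[self.in_dim * self.out_dim:]
        return x @ W.t() + b

net = HyperLinear()
x = torch.randn(16, 4)           # a chunk of the stream
emb = torch.randn(8)             # e.g., a learned summary of recent data
print(net(x, emb).shape)         # torch.Size([16, 2])
```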

PROJECT NAME: Contrastive learning for multi-label classification with missing labels
KEYWORDS: contrastive learning, self-supervised learning, ranking loss, missing data
DESCRIPTION: Contrastive learning is a promising idea used in semi-supervised and unsupervised problems (https://github.com/sthalles/SimCLR). It relies on data augmentations, which induce a natural similarity between examples. In this project we will focus on the multi-label case and use similarity between labels as a way of defining similarity between examples.
PEOPLE: Marek Śmieja
CONTACT: marek.smieja[at]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).
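
One way label-based similarity could enter a contrastive objective, sketched with Jaccard overlap of label sets used as a soft positive weight; the exact formulation in the project may differ.

```python
import torch
import torch.nn.functional as F

def label_similarity_contrastive_loss(z, labels, tau=0.5):
    """Contrastive loss where the target similarity between examples is the
    Jaccard overlap of their label sets, not a binary positive/negative."""
    z = F.normalize(z, dim=1)
    sim = (z @ z.t()) / tau
    inter = labels @ labels.t()
    union = labels.sum(1, keepdim=True) + labels.sum(1) - inter
    target = inter / union.clamp(min=1)        # soft positive weights in [0, 1]
    eye = torch.eye(len(z), dtype=torch.bool)
    log_p = sim.masked_fill(eye, float("-inf")).log_softmax(dim=1)
    target = target.masked_fill(eye, 0)
    target = target / target.sum(1, keepdim=True).clamp(min=1e-6)
    return -(target * log_p).masked_fill(eye, 0).sum(1).mean()

z = torch.randn(8, 32)                          # embeddings
labels = torch.randint(0, 2, (8, 5)).float()    # multi-hot label matrix
print(float(label_similarity_contrastive_loss(z, labels)))
```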

PROJECT NAME: Conditional hypernetwork for continuous image generation
KEYWORDS: generative models for images, hypernetworks
DESCRIPTION: Conditional image generation relies on generating images from specific classes or with desired features (https://towardsdatascience.com/understanding-conditional-variational-autoencoders-cd62b4f57bf8). It was shown that hypernetworks can be used as a functional image representation and applied in typical (unconditional) generative models (https://arxiv.org/pdf/2011.12026.pdf). We investigate whether hypernetworks can also be used for conditional image generation.
PEOPLE: Marek Śmieja
CONTACT: marek.smieja[at]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: Case-based interpretability
KEYWORDS: deep networks, interpretability
DESCRIPTION: The aim of the project is to develop new self-explainable neural networks based on the case-based methodology.
PEOPLE: Dawid Rymarczyk, Bartosz Zieliński, Łukasz Struski, Jacek Tabor
CONTACT: bartosz.zielinski[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: Self-explainable multiple instance learning
KEYWORDS: deep learning, weakly-supervised learning, multiple instance learning
DESCRIPTION: The project aims to develop new architectures for weakly supervised setups, especially for medical images.
PEOPLE: Bartosz Zieliński, Dawid Rymarczyk, Adriana Borowa
CONTACT: bartosz.zielinski[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in PyTorch (in order to understand the existing code and implement new methods).
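
For context, a minimal sketch of a common self-explainable MIL building block: attention-based pooling in the spirit of Ilse et al. (2018), where the attention weights double as per-instance explanations. Whether the project uses this exact mechanism is an assumption.

```python
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Attention-based MIL pooling: the bag embedding is an attention-weighted
    mean of instance embeddings; the weights indicate which instances matter."""
    def __init__(self, dim=64, hidden=32):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))

    def forward(self, instances):                       # (n_instances, dim)
        weights = self.attn(instances).softmax(dim=0)   # (n_instances, 1)
        return (weights * instances).sum(dim=0), weights

pool = AttentionMILPooling()
bag = torch.randn(12, 64)          # e.g., 12 patches of one medical image
embedding, weights = pool(bag)
print(embedding.shape, weights.squeeze(1))
```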

PROJECT NAME: Visual probing task
KEYWORDS: deep learning, probing tasks, computer vision, explainability
DESCRIPTION: The project aims to develop new methods for explaining image representations obtained from self-supervised methods.
PEOPLE: Bartosz Zieliński, Tomasz Trzciński, Witold Oleszkiewicz, Dominika Basaj
CONTACT: bartosz.zielinski[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: Segmentation learning with region of confidence
KEYWORDS: deep neural networks, learning methods
DESCRIPTION: What if not all objects in a single sample are labeled? Can we develop a method for learning deep models in such a case?
PEOPLE: Krzysztof Misztal
CONTACT: krzysztof.misztal@uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know the basics of TensorFlow or PyTorch; experience in mathematical analysis is preferred.
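
A natural starting point, assuming the labeled region is given as a binary mask: restrict the per-pixel loss to the region of confidence, so unlabeled pixels contribute no gradient.

```python
import torch
import torch.nn.functional as F

def masked_segmentation_loss(logits, target, confidence_mask):
    """Per-pixel cross-entropy computed only inside the labeled 'region of
    confidence'; pixels outside the mask contribute no gradient."""
    loss = F.cross_entropy(logits, target, reduction="none")   # (B, H, W)
    return (loss * confidence_mask).sum() / confidence_mask.sum().clamp(min=1)

logits = torch.randn(2, 5, 32, 32, requires_grad=True)   # 5 classes
target = torch.randint(0, 5, (2, 32, 32))
mask = (torch.rand(2, 32, 32) > 0.5).float()             # labeled region
masked_segmentation_loss(logits, target, mask).backward()
```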

PROJECT NAME: Generative Models in Drug Design
KEYWORDS: deep learning, cheminformatics, generative models
DESCRIPTION: We want to find a way of generating chemical molecules that is useful from the perspective of the drug design process. Primarily, the new generative model should be able to follow given structural constraints and generate structural analogs, i.e. molecules similar to previously seen promising compounds.
PEOPLE: Tomasz Danel, Łukasz Maziarka
CONTACT: tomasz.danel[AT]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch and TensorFlow

PROJECT NAME: Adaptations of travel behaviour in an agent-based urban mobility model
KEYWORDS: agent-based, reinforcement learning, two-sided mobility, urban mobility
DESCRIPTION: You will simulate a two-sided urban mobility market (like Uber or Lyft) where agents get rewarded for their actions. In particular, travellers can decide among platforms (Uber or Lyft) or opt out (use public transport) based on previous experiences. They, however, need to learn which actions are (subjectively) optimal for them. You will use https://github.com/RafalKucharskiPK/MaaSSim and apply decision modules to the agents.
PEOPLE: Rafał Kucharski
CONTACT: rafal.kucharski__uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch or TensorFlow
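
A hedged sketch of the agent-side learning loop only (MaaSSim itself is not used here): each traveller runs an epsilon-greedy bandit over the three options, with hypothetical reward numbers standing in for simulated travel times.

```python
import random

OPTIONS = ["uber", "lyft", "public_transport"]

class Traveller:
    """Epsilon-greedy agent keeping a running mean reward per option."""
    def __init__(self, eps=0.1):
        self.eps = eps
        self.value = {o: 0.0 for o in OPTIONS}
        self.count = {o: 0 for o in OPTIONS}

    def choose(self):
        if random.random() < self.eps:
            return random.choice(OPTIONS)        # explore
        return max(OPTIONS, key=lambda o: self.value[o])  # exploit

    def update(self, option, reward):
        self.count[option] += 1
        self.value[option] += (reward - self.value[option]) / self.count[option]

# Toy environment: subjective reward = -(travel time); numbers are made up.
def reward(option):
    base = {"uber": -20, "lyft": -22, "public_transport": -30}[option]
    return base + random.gauss(0, 3)

agent = Traveller()
for day in range(200):
    opt = agent.choose()
    agent.update(opt, reward(opt))
print(agent.value)   # the agent should come to prefer the fastest option
```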

PROJECT NAME: Distributed learning for CAVs in a two-sided mobility market
KEYWORDS: agent-based, reinforcement learning, two-sided mobility, urban mobility
DESCRIPTION: You will simulate a two-sided urban mobility market (like Uber or Lyft) where agents get rewarded for their actions. In particular, drivers (or Connected Autonomous Vehicles, CAVs) can reposition and wait for requests in different parts of the city. They, however, need to learn when and where it is efficient to reposition. You will use https://github.com/RafalKucharskiPK/MaaSSim and apply decision modules to the agents.
PEOPLE: Rafał Kucharski
CONTACT: rafal.kucharski__uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch or TensorFlow

PROJECT NAME: A benchmark for comparing efficient inference methods and models
KEYWORDS: conditional computation, pruning, computationally efficient deep models
DESCRIPTION: The project aims to create a unified benchmark for multiple methods that reduce the inference time of deep models. It involves creating a common codebase, implementing multiple methods, and evaluating them on the task of accurate and efficient inference. Most of the methods to be compared are from the conditional computation family.
PEOPLE: Bartosz Wójcik, Maciej Wołczyk
CONTACT: bartwojc[AT]gmail.com
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: Neural Molecular Docking
KEYWORDS: deep learning, geometric deep learning, cheminformatics, computer-aided drug design
DESCRIPTION: The goal is to create a one-shot model that predicts docking poses of molecules inside a binding pocket. Drugs bind to the binding pocket of the target protein to modulate its functions. Typically, these drug-target interactions are modelled by molecular docking, which predicts the binding pose of the compound in 3D space. Molecular docking methods are time-consuming due to the optimization methods used. In this project, we want to develop a neural network that can quickly generate docking poses, or even distributions over possible docking poses.
PEOPLE: Tomasz Danel, Przemysław Spurek
CONTACT: tomasz.danel[AT]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch (preferably PyTorch Geometric)

PROJECT NAME: Interpretable Graph Neural Networks Using Prototypes (with Applications in Drug Discovery)
KEYWORDS: deep learning, explainable artificial intelligence, graph neural networks, interpretability, cheminformatics
DESCRIPTION: Molecular graphs are a popular representation of molecules in machine learning. State-of-the-art methods for predicting molecular properties are based on graph neural networks. However, the interpretability of these methods is limited. In the project, we plan to implement a prototype-based method for interpreting the results of graph neural networks. The idea is borrowed from computer vision, where prototypes were used with convolutional neural networks to provide explanations for the predictions. Each prototype corresponds to some image feature that can be easily recognized by a human eye.
PEOPLE: Tomasz Danel, Dawid Rymarczyk
CONTACT: tomasz.danel[AT]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch (preferably PyTorch Geometric)
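
A sketch of the prototype-layer idea as used in computer vision (ProtoPNet-style similarities); how it would attach to a particular GNN is left open here, and the graph embeddings below are placeholders.

```python
import torch
import torch.nn as nn

class PrototypeLayer(nn.Module):
    """Prototype layer in the spirit of ProtoPNet: classification is based on
    similarities between the (graph) embedding and learned prototype vectors."""
    def __init__(self, dim=64, n_prototypes=10, n_classes=2):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, dim))
        self.classifier = nn.Linear(n_prototypes, n_classes, bias=False)

    def forward(self, h):                        # h: (batch, dim) embeddings
        d = torch.cdist(h, self.prototypes)      # distance to each prototype
        sim = torch.log((d**2 + 1) / (d**2 + 1e-4))   # ProtoPNet similarity
        return self.classifier(sim), sim         # sims serve as the explanation

layer = PrototypeLayer()
h = torch.randn(4, 64)                           # embeddings from some GNN
logits, sims = layer(h)
print(logits.shape, sims.shape)
```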

PROJECT NAME: Aggregation Methods for Molecular Graphs
KEYWORDS: graph neural networks, cheminformatics, clustering
DESCRIPTION: In this project, we aim to investigate different approaches to graph aggregation in order to find a method best suited for molecular data. We will also implement a pooling layer that is based on chemical fragments, e.g. functional groups, to simplify the molecular graph input and (hopefully) increase the predictive performance of the network.
PEOPLE: Tomasz Danel, Ewa Swatowska
CONTACT: tomasz.danel[AT]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch (preferably PyTorch Geometric)
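
A fragment-based pooling step can be expressed as a scatter-style aggregation; the fragment assignment below is a hypothetical placeholder for what would come from an actual chemical fragmentation.

```python
import torch

h = torch.randn(9, 32)                              # atom features of a molecule
frag = torch.tensor([0, 0, 0, 1, 1, 2, 2, 2, 2])    # hypothetical fragment ids
pooled = torch.zeros(3, 32).index_add_(0, frag, h)  # sum-pool atoms per fragment
print(pooled.shape)   # (n_fragments, dim): a coarsened graph for further layers
```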

PROJECT NAME: Mol2Image Translation: Generation of High-Content Images Based on Chemical Structures
KEYWORDS: convolutional neural networks, computer vision, cheminformatics
DESCRIPTION: High-Content Screening is a technology that accelerates drug discovery pipelines by providing a fast method for screening vast numbers of chemical compounds and analysing the output images from fluorescence microscopy (Google "high-content screening"; the images are quite cool). In this project we want to create a generative model that transforms chemical structures (SMILES strings or molecular graphs) into images representing phenotypical changes in cellular systems.
PEOPLE: Tomasz Danel
CONTACT: tomasz.danel[AT]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch or TensorFlow

PROJECT NAME: Interpretable Uncertainty in Molecular Data
KEYWORDS: cheminformatics, machine learning, uncertainty
DESCRIPTION: In drug discovery pipelines, it is important to accurately predict molecular properties for yet unseen chemical compounds. Of course, the quality of predictions decreases as we depart from the known chemical space. There are some methods for assessing the uncertainty of predictions for out-of-domain data, e.g. conformal prediction, but they do not provide us with explanations of why the predictions were marked as uncertain. We plan to create an interpretable uncertainty estimator that indicates "strange" chemical fragments (different from the training set) in the compound.
PEOPLE: Tomasz Danel
CONTACT: tomasz.danel[AT]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: ML basics (scikit-learn, numpy); a basic understanding of chemistry is a plus (solid high-school level)
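
A deliberately toy stand-in for the planned estimator, flagging pieces of a molecule unseen in training; real work would use proper chemical fragments (e.g. via RDKit), not character n-grams of SMILES, and the molecules below are arbitrary examples.

```python
# Toy fragment-level uncertainty: score = fraction of "fragments" (here,
# character 3-grams of the SMILES string) never seen in the training set.
def ngrams(smiles, n=3):
    return {smiles[i:i + n] for i in range(len(smiles) - n + 1)}

train = ["CCO", "CCN", "c1ccccc1O"]               # tiny toy training set
known = set().union(*(ngrams(s) for s in train))

def strange_fragments(smiles):
    frags = ngrams(smiles)
    unseen = frags - known
    return len(unseen) / max(len(frags), 1), unseen   # score + explanation

print(strange_fragments("c1ccccc1O"))    # familiar molecule: low score
print(strange_fragments("CC(=O)OCCCl"))  # novel pieces: higher score, shown explicitly
```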

PROJECT NAME: Contrastive Learning for Graphs
KEYWORDS: graph theory, graph neural networks, cheminformatics
DESCRIPTION: The aim of this project is to create a method for unsupervised/self-supervised/semi-supervised pre-training of graph neural networks using contrastive learning. Contrastive methods use graph similarity to learn a representation of the input graph data. In some domains, such as chemistry, graph edit distance does not describe dissimilarities between graphs well. We plan to create better methods, e.g. ones aligned with the perception of chemists in the molecular domain.
PEOPLE: Tomasz Danel
CONTACT: tomasz.danel[AT]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch or TensorFlow

PROJECT NAME: Ride-pooling heuristics: combinatorial explosion and supervised learning
KEYWORDS: supervised learning, graph theory, urban mobility, transport
DESCRIPTION: You will apply a ride-pooling algorithm which pools travellers (e.g. of Uber) into attractive groups. You will use ExMAS (https://github.com/RafalKucharskiPK/ExMAS), which provides an exact analytical search in the combinatorially exploding search space (e.g. for 1000 trip requests there is almost a googol of possible 5-person groups). You will use these analytical results to train supervised machine learning models and explore ways to make the search space tractable.
PEOPLE: Rafał Kucharski
CONTACT: rafal.kucharski__uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch or TensorFlow, optimization, ILP, networkX

PROJECT NAME: Predicting pooled rides in Chicago (dataset of 5 mln trips)
KEYWORDS: supervised learning, graph theory, urban mobility, transport, XAI
DESCRIPTION: In the dataset of 5 mln trips made with Uber in Chicago, some of them (20%) are pooled, i.e. travellers ride together. Which ones, and why? Can we use this dataset to successfully predict which trips will be pooled and what factors influence it? This paper scratched the surface; let's go deeper: https://doi.org/10.1177/0361198120915886
PEOPLE: Rafał Kucharski
CONTACT: rafal.kucharski__uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch or TensorFlow, pandas, XAI

PROJECT NAME: Model compression in Transformer-based Language Models
KEYWORDS: NLP, model compression, transformers
DESCRIPTION: The goal of this research is to propose a complete methodology for compressing large language models based on the Transformer architecture.
PEOPLE: Klaudia Bałazy
CONTACT: klaudia.balazy[at]doctoral.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch or TensorFlow, pandas, XAI

PROJECT NAME: Non-supervised learning in Conditional Computation
KEYWORDS: conditional computation, early exit, transfer learning, semi-supervised learning
DESCRIPTION: The goal is to extend our NeurIPS paper (link: https://arxiv.org/abs/2106.05409) to other applications such as transfer learning, unsupervised learning, semi-supervised learning, and others.
PEOPLE: Maciej Wołczyk
CONTACT: maciej.wolczyk[at]gmail.com
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch

GMUM PROJECTS (Not actively looking for students)

These projects are not actively looking for students. However, if you find a project interesting and would like to know more about it, you may contact the person listed in the "CONTACT" field.

PROJECT NAME: New representations for molecules
KEYWORDS: deep learning, cheminformatics
DESCRIPTION: We would like to find new representations for molecules. We could work both on new embedding methods for molecules and on new input representations for graph neural networks.
PEOPLE: Łukasz Maziarka, Tomasz Danel, Agnieszka Pocha
CONTACT: lukasz.maziarka@student.uj.edu.pl
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: Python, PyTorch, TensorFlow

PROJECT NAME: Continual Learning with Experience Replay
KEYWORDS: deep learning, continual learning, experience replay
DESCRIPTION: Catastrophic forgetting occurs in neural networks: when training on a new task, the model completely forgets what it has learned on previous tasks. One of the most efficient ways of combating catastrophic forgetting is experience replay, i.e. retraining on a small set of examples from previous tasks. However promising, this approach has not been properly explored. This project aims to understand and improve experience replay methods for continual learning.
PEOPLE: Maciej Wołczyk, Marek Śmieja, Jacek Tabor
CONTACT: maciej.wolczyk[AT]gmail.com
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: PyTorch basics
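
For orientation, a minimal replay buffer with reservoir sampling, a common baseline in the experience replay literature: every example seen so far has an equal chance of being stored, regardless of which task it came from.

```python
import random

class ReservoirBuffer:
    """Reservoir-sampling replay buffer: after n examples have streamed past,
    each of them is kept with probability capacity / n."""
    def __init__(self, capacity):
        self.capacity, self.seen, self.data = capacity, 0, []

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            j = random.randrange(self.seen)   # replace a random slot...
            if j < self.capacity:             # ...with the right probability
                self.data[j] = example

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

buf = ReservoirBuffer(capacity=100)
for i in range(1000):          # stream of examples from a sequence of tasks
    buf.add(i)
print(len(buf.data), buf.sample(5))
```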

PROJECT NAME: Conditional Computation for Efficient Inference
KEYWORDS: deep learning, model compression, conditional computation
DESCRIPTION: The human brain can adaptively change the amount of resources used for the current task. Neural networks, however, constantly use all their available resources for every example. This is not only inconsistent with the biological perspective, but also highly inefficient. We work on an approach that uses fewer resources (layers, neurons) for easy examples and all available resources for difficult examples.
PEOPLE: Maciej Wołczyk, Bartosz Wójcik, Marek Śmieja, Jacek Tabor
CONTACT: maciej.wolczyk[AT]gmail.com
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: PyTorch basics

PROJECT NAME: Hypernetworks Knowledge Distillation
KEYWORDS: deep learning, teacher-student, computer vision, super-resolution
DESCRIPTION: We are using two hypernetworks in a teacher-student manner to solve the super-resolution task.
PEOPLE: Maciej Wołczyk, Szymon Rams, Tomasz Danel, Łukasz Maziarka
CONTACT: lukasz.maziarka@student.uj.edu.pl
STUDENT NEEDED: NO

PROJECT NAME: Aspect-Level Sentiment Classification
KEYWORDS: Natural Language Processing, Sentiment Classification, Attention Modeling, Deep Learning
DESCRIPTION: Aspect-level sentiment classification aims to identify the sentiment expressed towards some aspects given context sentences. Recently, Hu et al. proposed CAN (https://arxiv.org/pdf/1812.10735.pdf). However, such a mechanism suffers from a major drawback: it seems to overly focus on a few frequent words with sentiment polarities, and little attention is paid to low-frequency ones. Our potential solution to this issue is supervised attention.
PEOPLE: Magdalena Wiercioch
CONTACT: mgkwiercioch[AT]gmail.com
STUDENT NEEDED: NO

PROJECT NAME: Molecule Representation for Predicting Drug-Target Interaction
KEYWORDS: Deep Learning, Representation Learning, Cheminformatics
DESCRIPTION: An essential part of the drug discovery process is predicting drug-target interactions. However, the process is expensive in terms of both time and cost. A precisely learned molecule representation in a drug-target interaction model could contribute to developing personalized medicine, which will help many patient cohorts. We want to propose a few molecule representations based on various concepts, including but not limited to deep neural networks.
PEOPLE: Magdalena Wiercioch
CONTACT: mgkwiercioch[AT]gmail.com
STUDENT NEEDED: NO

PROJECT NAME: Deep learning for molecular design
KEYWORDS: Deep Learning, Cheminformatics, Molecular Design
DESCRIPTION: Searching for new molecules in areas such as drug discovery usually starts from the core structures of candidate molecules to optimize the properties of interest. Our present work proposes a graph recurrent generative model for molecular structures. The model incorporates side information into the recurrent neural network.
PEOPLE: Magdalena Wiercioch
CONTACT: mgkwiercioch[AT]gmail.com
STUDENT NEEDED: NO

PROJECT NAME: Optimization in deep policy gradient methods
KEYWORDS: Deep Learning, Reinforcement Learning, Optimization
DESCRIPTION: Deep policy gradient methods, currently among the most used tools of reinforcement learning researchers, have some non-obvious optimization properties. We investigate questions such as: why is PPO more efficient than TRPO, how important are the various tricks used when implementing PPO, and how can we improve the sample efficiency of these methods?
PEOPLE: Maciej Wołczyk
CONTACT: maciej.wolczyk[AT]gmail.com
STUDENT NEEDED: NO
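
Since PPO is central here, its clipped surrogate objective (Schulman et al., 2017) in a few lines; the random tensors stand in for data collected by a rollout worker.

```python
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    """PPO clipped surrogate objective: the probability ratio is clipped so a
    single update cannot move the policy too far from the data-collecting one."""
    ratio = (logp_new - logp_old).exp()
    unclipped = ratio * advantages
    clipped = ratio.clamp(1 - eps, 1 + eps) * advantages
    return -torch.minimum(unclipped, clipped).mean()

logp_new = torch.randn(32, requires_grad=True)   # from the current policy
loss = ppo_clip_loss(logp_new, torch.randn(32), torch.randn(32))
loss.backward()
print(float(loss))
```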

PROJECT NAME: Optimization in neural networks without backpropagation and gradients
KEYWORDS: Deep Learning, Optimization, Bio-inspired
DESCRIPTION: Neuroscientific studies of the mechanisms of learning in the brain suggest that backpropagation (and especially backpropagation through time, as in RNNs) may not be a viable method of learning in neural structures. We want to explore other, more biologically justified approaches to this problem.
PEOPLE: Jacek Tabor, Aleksandra Nowak, Maciej Wołczyk
CONTACT: maciej.wolczyk[AT]gmail.com
STUDENT NEEDED: NO

PROJECT NAME: Fidelity-Weighted Learning
KEYWORDS: neural networks, deep neural networks, learning methods
DESCRIPTION: Fidelity-weighted learning is a student-teacher method for learning from labels of varying quality.
PEOPLE: Krzysztof Misztal, Agnieszka Pocha
CONTACT: krzysztof.misztal@uj.edu.pl
STUDENT NEEDED: NO

PROJECT NAME: Generating Active Compounds Through Predicting Molecular Docking Components
KEYWORDS: deep learning, cheminformatics, computer-aided drug design, molecular simulation
DESCRIPTION: In medicinal chemistry, to assess which chemical compounds will be active towards a given target, a library of promising compounds is docked to the target (in simulation). Compounds which dock well are promising drug candidates that will be synthesized. We want to generate compounds that dock well instead of using real activity.
PEOPLE: Tomasz Danel, Łukasz Maziarka, Igor Podolak, Stanisław Jastrzębski
CONTACT: tomasz.danel[AT]ii.uj.edu.pl
STUDENT NEEDED: NO

PROJECT NAME: Continual Learning in vision tasks
KEYWORDS: continual learning, deep networks
DESCRIPTION: The aim of the project is to develop a method for continual learning of deep neural network architectures. Such a model should be able to learn new tasks without forgetting the previous ones, where some part of the model is shared between tasks and the lowest possible amount of old-task resources is kept to prevent forgetting.
PEOPLE: Jacek Tabor, Igor Podolak, Bartosz Zieliński, Łukasz Struski, Dawid Rymarczyk
CONTACT: bartosz.zielinski[AT]uj.edu.pl
STUDENT NEEDED: NO

PROJECT NAME: Neural networks adapting to datasets: learning network size and topology
KEYWORDS: deep learning, network pruning, neural architectures
DESCRIPTION: We introduce a flexible setup allowing a neural network to learn both its size and topology during the course of standard gradient-based training. The resulting network has the structure of a graph tailored to the particular learning task and dataset. The obtained networks can also be trained from scratch and achieve virtually identical performance. We explore the properties of the network architectures for a number of datasets of varying difficulty, observing systematic regularities.
PEOPLE: Romuald Janik, Aleksandra Nowak
CONTACT: aleksandrairena.nowak[AT]doctoral.uj.edu.pl
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: https://arxiv.org/abs/2006.12195

PROJECT NAME: Relationship between disentanglement and multi-task learning
KEYWORDS: deep learning, disentanglement learning, multi-task learning, hard parameter sharing
DESCRIPTION: One of the main arguments behind studying disentangled representations is the assumption that they can be easily reused in different tasks. At the same time, finding a joint, adaptable representation of data is one of the key challenges in many multi-task learning settings. The aim of the project is to take a closer look at the relationship between disentanglement and multi-task learning.
PEOPLE: Łukasz Maziarka, Aleksandra Nowak, Andrzej Bedychaj, Maciej Wołczyk
CONTACT: aleksandrairena.nowak[AT]doctoral.uj.edu.pl
STUDENT NEEDED: NO

PROJECT NAME: Explaining metabolic stability (pilot study)
KEYWORDS: cheminformatics, explainability, web service
DESCRIPTION: Metabolic stability is one of several molecular properties optimised in drug design pipelines. It is connected with the duration of the desirable therapeutic effect of the drug. The exact mechanisms of drug metabolism are yet to be discovered; explaining the predictions of machine learning models can give us ideas about which chemical structures are important. We plan to publish a pilot study in the International Journal of Molecular Sciences (IF: 4.56, number of ministerial points: 30). The planned submission date is by the end of the year.
PEOPLE: Agnieszka Pocha, Sabina Podlewska
CONTACT: agnieszka.pocha[at]doctoral.uj.edu.pl
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: The AI and explainability parts are mostly done. We need a student to build an interactive webpage which will present the existing explanations and generate new ones (using the provided Python API) for molecules uploaded by users. We require knowledge of designing and building webpages, as well as standard technologies including HTML, CSS and JavaScript.

PROJECT NAME: Semi-supervised siamese neural networks
KEYWORDS: siamese neural networks, semi-supervised learning
DESCRIPTION: Siamese networks are used to label new examples when we have a large number of classes: https://www.cs.cmu.edu/~rsalakhu/papers/oneshot1.pdf. We focus on designing semi-supervised versions of siamese networks, where we use only a small number of labeled examples. Example of such a method: https://www.ijcai.org/proceedings/2017/0358.pdf
PEOPLE: Marek Śmieja
CONTACT: marek.smieja[AT]ii.uj.edu.pl
STUDENT NEEDED: NO

PROJECT NAME: Clustering with pairwise constraints
KEYWORDS: clustering, pairwise constraints (must-link, cannot-link), semi-supervised learning
DESCRIPTION: Clustering is an ill-posed problem. Making use of a small number of labeled data points, we can specify what we mean by similarity. This is the area of semi-supervised clustering. We will focus on constructing discriminative clustering models which take the information about labeled data into account.
PEOPLE: Marek Śmieja
CONTACT: marek.smieja[AT]ii.uj.edu.pl
STUDENT NEEDED: NO
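
A sketch of how must-link/cannot-link constraints can be turned into a differentiable penalty on soft cluster assignments; the exact formulation used in the project may differ.

```python
import torch

def constraint_loss(p, must_link, cannot_link):
    """Penalty on soft cluster assignments p (n, k): must-link pairs should
    land in the same cluster, cannot-link pairs in different ones."""
    same = lambda i, j: (p[i] * p[j]).sum()   # P(i and j share a cluster)
    ml = torch.stack([1 - same(i, j) for i, j in must_link]).mean()
    cl = torch.stack([same(i, j) for i, j in cannot_link]).mean()
    return ml + cl

p = torch.randn(6, 3).softmax(dim=1)          # 6 points, 3 clusters
print(float(constraint_loss(p, [(0, 1), (2, 3)], [(0, 5)])))
```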

PROJECT NAME: Generative model in the multi-label case
KEYWORDS: generative models, flow models, disentanglement, multi-label classification
DESCRIPTION: We construct a semi-supervised generative model for partially labeled data. More precisely, every example can be labeled using many binary attributes, but we only have access to a few labels. Such a generative model should allow for generating new examples with desired properties (labels).
PEOPLE: Marek Śmieja, Maciej Wołczyk, Łukasz Maziarka
CONTACT: marek.smieja[AT]ii.uj.edu.pl
STUDENT NEEDED: NO

PROJECT NAME: Multi-output regression for object tracking
KEYWORDS: generative models, image processing, object detection, hypernetworks, clustering, regression
DESCRIPTION: We consider the problem of predicting object position. As the future is uncertain to a large extent, modeling the uncertainty and multimodality of the future states is of great relevance. For this purpose, we will use a generative model that takes the multimodality and uncertainty into account. The goal of the project is to compare with https://openaccess.thecvf.com/content_CVPR_2019/papers/Makansi_Overcoming_Limitations_of_Mixture_Density_Networks_A_Sampling_and_Fitting_CVPR_2019_paper.pdf
PEOPLE: Marek Śmieja, Jacek Tabor, Przemysław Spurek
CONTACT: marek.smieja[AT]ii.uj.edu.pl
STUDENT NEEDED: NO
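
Since the cited paper builds on mixture density networks, a minimal MDN head for 2-D positions may be useful context; the hidden size and component count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MDNHead(nn.Module):
    """Minimal mixture density network head: predicts a k-component Gaussian
    mixture over the next 2-D position, capturing multimodal futures."""
    def __init__(self, dim=64, k=5):
        super().__init__()
        self.k = k
        self.out = nn.Linear(dim, k * 5)   # per component: weight, mu(2), log_sigma(2)

    def forward(self, h):
        p = self.out(h).view(-1, self.k, 5)
        log_w = p[..., 0].log_softmax(dim=-1)
        mu, log_sigma = p[..., 1:3], p[..., 3:5]
        return log_w, mu, log_sigma

def mdn_nll(log_w, mu, log_sigma, y):
    """Negative log-likelihood of target positions y (batch, 2) under the mixture."""
    y = y.unsqueeze(1)                                   # (batch, 1, 2)
    log_comp = (-0.5 * ((y - mu) / log_sigma.exp())**2
                - log_sigma - 0.9189385).sum(-1)         # 0.9189385 = 0.5*ln(2*pi)
    return -(torch.logsumexp(log_w + log_comp, dim=-1)).mean()

head = MDNHead()
log_w, mu, log_sigma = head(torch.randn(8, 64))          # 8 track embeddings
print(float(mdn_nll(log_w, mu, log_sigma, torch.randn(8, 2))))
```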

PROJECT NAME: Auto-encoder with discrete latent space
KEYWORDS: auto-encoder, generative models, discrete variables, reparametrization trick, importance sampling
DESCRIPTION: Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables due to the inability to backpropagate through samples. In this project we want to build an auto-encoder model with a discrete latent space, see https://arxiv.org/pdf/1611.01144.pdf
PEOPLE: Marek Śmieja, Jacek Tabor, Łukasz Struski, Klaudia Bałazy
CONTACT: marek.smieja[AT]ii.uj.edu.pl
STUDENT NEEDED: NO
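
The standard tool for backpropagating through categorical samples, from the linked paper, is the Gumbel-softmax relaxation (Jang et al., 2016). Below is a manual version for clarity; PyTorch also ships a built-in F.gumbel_softmax.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau=1.0):
    """Differentiable sample from a categorical distribution: add Gumbel noise
    to the logits, then relax the argmax to a temperature-scaled softmax."""
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    return F.softmax((logits + gumbel) / tau, dim=-1)

logits = torch.randn(4, 10, requires_grad=True)   # e.g., encoder output
z = gumbel_softmax_sample(logits, tau=0.5)        # near-one-hot latent code
z.sum().backward()                                # gradients reach the encoder
print(z.argmax(dim=-1))
```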

PROJECT NAME: Learning neural networks from missing data
KEYWORDS: missing data, convolutional neural networks
DESCRIPTION: The project concerns the problem of training convolutional neural networks directly on missing data. We plan to extend the following model: https://papers.nips.cc/paper/7537-processing-of-missing-data-by-neural-networks.pdf
PEOPLE: Marek Śmieja, Łukasz Struski, Jacek Tabor
CONTACT: marek.smieja[AT]ii.uj.edu.pl
STUDENT NEEDED: NO

PROJECT NAME: Pharmacophoric Autoencoder
KEYWORDS: deep learning, cheminformatics, autoencoders
DESCRIPTION: The plan is to create an autoencoder that works with molecular graphs embedded in 3D space. Such a generative model should be able to generate compounds with some pre-defined 3D constraints. As input, it takes 3D molecular graphs and pharmacophoric features represented as 3D points. The 3D positions are generated by molecular docking software. The pharmacophoric (3D) constraints are given in the latent space of this model.
PEOPLE: Tomasz Danel, Łukasz Maziarka, Bartosz Podkanowicz, Artur Kasymov, Sabina Podlewska, Marek Śmieja, Igor Podolak
CONTACT: tomasz.danel[AT]ii.uj.edu.pl
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: PyTorch

PROJECT NAME: Non-Gaussian Gaussian Processes
KEYWORDS: deep learning, meta-learning, few-shot learning, regression, gaussian processes, normalizing flows
DESCRIPTION: Gaussian Processes (GPs) have been widely used in machine learning to model distributions over functions, with applications including multi-modal regression, time-series prediction, and few-shot learning. GPs are particularly useful in the last application since they rely on Normal distributions and, hence, enable closed-form computation of the posterior probability function. Unfortunately, because the resulting posterior is not flexible enough to capture complex distributions, GPs assume high similarity between subsequent tasks, a requirement rarely met in real-world conditions. In this work, we address this limitation by leveraging the flexibility of Normalizing Flows to modulate the posterior predictive distribution of the GP, which makes the GP posterior locally non-Gaussian.
PEOPLE: Marcin Sendera, Jacek Tabor, Aleksandra Nowak, Andrzej Bedychaj, Massimiliano Patacchiola, Tomasz Trzciński, Przemysław Spurek, Maciej Zięba
CONTACT: marcin.sendera[AT]doctoral.uj.edu.pl
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: PyTorch

PROJECT NAME: Better Knowledge Transfer for Non-Gaussian Gaussian Processes (Continuous Object Tracking)
KEYWORDS: deep learning, meta-learning, few-shot learning, object tracking, gaussian processes, normalizing flows
DESCRIPTION: This project is based on the previous Non-Gaussian Gaussian Processes (NGGP) work. We are going to enhance the NGGPs' flexibility by introducing the full information coming from the support data in the few-shot learning setting. The main aim is to enable much faster learning and knowledge transfer from task to task. We also want to apply such a solution to the continuous object tracking problem: specifically, to generate the probability density function of the object position over time, assuming knowledge of a discrete set of past positions.
PEOPLE: Marcin Sendera, Jacek Tabor, Massimiliano Patacchiola, Przemysław Spurek, Maciej Zięba, Rafał Nowak
CONTACT: marcin.sendera[AT]doctoral.uj.edu.pl
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: PyTorch

PROJECT NAME: Normalizing Flows in Anomaly Detection
KEYWORDS: deep learning, generative models, normalizing flows, anomaly detection
DESCRIPTION: Anomaly detection is the problem of identifying abnormal or novel data among the normal ones. We propose to utilize the flexibility of Normalizing Flow models, which can be treated as bijections from the original space to a new latent space. This property allows the points' density to be expressed explicitly in algebraic form in the latent space. We utilize various forms of objective functions combined with different flow models to discover anomalies.
PEOPLE: Marcin Sendera, Jacek Tabor
CONTACT: marcin.sendera[AT]doctoral.uj.edu.pl
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: PyTorch
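
A toy illustration of density-based anomaly scoring with the simplest possible flow, a fixed affine bijection; real projects would use a trained RealNVP-style model, but the change-of-variables bookkeeping is the same.

```python
import torch

# Toy affine "flow": z = A x + b is a bijection, so by change of variables
# log p(x) = log N(z; 0, I) + log|det A|. A and b are fixed here for simplicity.
A = torch.tensor([[2.0, 0.3], [-0.5, 1.5]])   # det = 3.15 > 0
b = torch.zeros(2)
base = torch.distributions.MultivariateNormal(torch.zeros(2), torch.eye(2))

def log_prob(x):
    z = x @ A.t() + b
    return base.log_prob(z) + torch.logdet(A)

x_normal = torch.randn(5, 2) * 0.2            # in-distribution-ish points
x_weird = torch.randn(5, 2) * 5 + 10          # far-away points
scores = log_prob(torch.cat([x_normal, x_weird]))
print(scores)                                 # anomalies get far lower log-density
print(scores < scores.median())               # simple below-threshold flag
```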