Publications

Below is a list of publications that used the entropy cluster for scientific computations. The list is not complete and is still growing!

[1]:

Konrad Czechowski, Tomasz Odrzygóźdź, Marek Zbysiński, Michał Zawalski, Krzysztof Olejnik, Yuhuai Wu, Łukasz Kuciński, Piotr Miłoś. “Subgoal Search For Complex Reasoning Tasks”. In: NeurIPS 2021.

https://proceedings.neurips.cc/paper/2021/hash/05d8cccb5f47e5072f0a05b5f514941a-Abstract.html

[2]:

Maciej Wołczyk, Michał Zając, Razvan Pascanu, Łukasz Kuciński, Piotr Miłoś. “Continual World: A Robotic Benchmark For Continual Reinforcement Learning”. In: NeurIPS 2021.

[3]:

Michał Zawalski, Błażej Osiński, Henryk Michalewski, Piotr Miłoś. “Off-Policy Correction For Multi-Agent Reinforcement Learning”. In: AAMAS 2022.

https://doi.org/10.48550/arXiv.2111.11229

[4]:

Konrad Czechowski, Tomasz Odrzygóźdź, Michał Izworski, Marek Zbysiński, Łukasz Kuciński, Piotr Miłoś. “Trust, but verify: model-based exploration in sparse reward environments”. In: IJCNN 2021.

[5]:

Piotr Kozakowski, Piotr Januszewski, Konrad Czechowski, Łukasz Kuciński, Piotr Miłoś. “Structure and randomness in planning and reinforcement learning”. In: IJCNN 2021.

[6]:

Dominik Filipiak, Piotr Tempczyk, Marek Cygan. “n-CPS: Generalising Cross Pseudo Supervision to n Networks for Semi-Supervised Semantic Segmentation”.

https://arxiv.org/abs/2112.07528

[7]:

Piotr Piękos, Henryk Michalewski, Mateusz Malinowski. “Measuring and Improving BERT’s Mathematical Abilities by Predicting the Order of Reasoning”.

https://arxiv.org/abs/2106.03921

[8]:

Piotr Nawrot, Szymon Tworkowski, Michał Tyrolski, Łukasz Kaiser, Yuhuai Wu, Christian Szegedy, Henryk Michalewski. “Hierarchical Transformers Are More Efficient Language Models”.

https://arxiv.org/abs/2110.13711

[9]:

Spyridon Mouselinos, Henryk Michalewski, Mateusz Malinowski. “Measuring CLEVRness: Blackbox testing of Visual Reasoning Models”.

https://arxiv.org/abs/2202.12162