Automated Knowledge Base Construction

10th Workshop on Automated Knowledge Base Construction

While Large Language Models (LLMs) have revolutionized NLP, they remain prone to hallucinations, reasoning “mode-collapse” in open-ended generation, and a lack of factual provenance. The Automated Knowledge Base Construction (AKBC) workshop addresses a key missing piece of the generative era: structured knowledge. Knowledge Bases (KBs) serve as ground truth for fact verification, as the semantic backbone for constrained decoding in generation, and as a resource for Retrieval-Augmented Generation (RAG).

The workshop contributes to the growing momentum around integrating structured knowledge into generative models, both at training time and at inference time.

The workshop is co-located with EMNLP 2026 in Budapest, Hungary. It follows a successful series of previous editions: as an independent conference in 2022, 2021, and 2020; as a workshop in 2017 at NIPS, in 2016 at NAACL, in 2014 at NIPS, in 2013 at CIKM, and in 2012 at NAACL; and in 2010 as a stand-alone event in Grenoble, France.

News

Important Dates

Direct submission (research + vision)
July 27, 2026 (Monday)
Direct submission (shared task)
August 15 (Saturday)
ARR commitment (research)
August 25 (Tuesday)
Notification
September 1 (Tuesday)
Camera-ready due
September 15 (Tuesday)
Workshop date
One day during October 24–29, 2026

Call for Papers

The 10th Workshop on Automated Knowledge Base Construction (AKBC) returns in 2026, bringing together researchers and practitioners working on the construction, integration, and use of structured knowledge in the era of large language models (LLMs). As LLMs continue to transform NLP, challenges such as hallucinations, lack of provenance, and limited reasoning reliability highlight the need for robust, explicit, and usable knowledge representations.

AKBC provides a venue at the intersection of natural language processing, knowledge representation, databases, and machine learning, with a particular focus on how symbolic structure can ground, constrain, and enhance generative models.

We invite submissions on topics including, but not limited to:

Knowledge for Generative Models

Building and Maintaining Knowledge

Retrieval and Reasoning

Vision Papers

Submission Types

We welcome three types of submissions, reflecting both mature research results and forward-looking ideas at the intersection of structured knowledge and generative AI. All submissions must follow the EMNLP formatting instructions.

1. Regular Research Papers (via ARR or direct submission)

For original research contributions. We accept papers reviewed through the ACL Rolling Review (ARR); authors may submit to ARR and commit their papers to AKBC. Accepted papers will follow standard ACL/EMNLP reviewing policies. The page limit is 8 pages + references.

2. Vision Papers (direct submission)

For bold ideas, emerging directions, and unifying perspectives. We particularly encourage papers that articulate new opportunities and challenges for knowledge base construction in the age of LLMs, and that help define promising research agendas for the field. The page limit is 4 pages + references.

3. Shared Task Papers (direct submission)

For system descriptions and analyses related to the AKBC shared task, co-located with the workshop. These submissions should describe participating systems, methodologies, and lessons learned from the challenge. The page limit is 4 pages + references.

Presentation Formats

Accepted papers will be presented as posters, with a selection invited for lightning talks. The workshop will also feature invited keynotes from leading researchers in academia and industry.

Shared Task

Large language models contain a substantial amount of factual knowledge. Turning that knowledge into reliable knowledge base entries, however, is much harder than answering a single factual question.

Given a subject s and a relation r, predict the complete set of correct object strings {o1, o2, …, ok}. Unlike standard factual QA, a subject may have zero, one, or many correct objects. The goal is to construct a complete and precise KB entry.
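The evaluation metric is not specified on this page; a natural choice for a task with variable-sized (possibly empty) answer sets is set-based precision, recall, and F1 per entry. The sketch below is an illustrative assumption, not the official scoring script — consult the shared-task subpage for the actual metric.

```python
def entry_scores(predicted, gold):
    """Set-based precision/recall/F1 for one (subject, relation) entry.

    Both arguments are collections of object strings. Either may be
    empty, since a subject can have zero correct objects. This is an
    illustrative metric, not the official AKBC scoring code.
    """
    predicted, gold = set(predicted), set(gold)
    if not predicted and not gold:
        # Correctly predicting "no objects" counts as a perfect entry.
        return 1.0, 1.0, 1.0
    tp = len(predicted & gold)  # true positives: objects in both sets
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return precision, recall, f1
```

For example, predicting {“Berlin”, “Hamburg”} when the gold set is {“Berlin”} yields precision 0.5 and recall 1.0 — over-generation is penalized just like missing objects, which distinguishes this setting from single-answer factual QA.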

Further details are on this subpage: https://www.akbc.ws/2026/shared-task.html

Invited Speakers

Invited speakers will be announced once confirmed. Speakers at previous AKBC workshops and conferences came from Google AI, Hugging Face, DeepMind, Apple, Stanford, the University of Washington, UCSC, CMU, and others.

Organization

Jan-Christoph Kalo, University of Amsterdam, Netherlands
Ndapa Nakashole, University of California, San Diego, USA
Simon Razniewski, ScaDS.AI Dresden/Leipzig & TU Dresden, Germany
Fabian M. Suchanek, Télécom Paris, Institut Polytechnique de Paris, France
Andreas Vlachos, University of Cambridge, United Kingdom
Andrew McCallum, University of Massachusetts Amherst, USA

Contact us at akbc2026@gmail.com for workshop inquiries, or at akbc2026-shared-task@googlegroups.com for shared task inquiries!