Bibliography

Standalone bibliography and references template

Category

Other

License

Free to use (MIT)

File

bibliography/main.tex

\documentclass[11pt,a4paper]{article}
\usepackage[margin=1in]{geometry}
\usepackage{hyperref}
\usepackage{xcolor}
\usepackage{fancyhdr}

\definecolor{darkblue}{RGB}{0,50,100}

\hypersetup{
    colorlinks=true,
    linkcolor=darkblue,
    citecolor=darkblue,
    urlcolor=darkblue
}

\pagestyle{fancy}
\fancyhf{}
\fancyhead[L]{\small Bibliography Template}
\fancyhead[R]{\small 2025}
\fancyfoot[C]{\thepage}
\renewcommand{\headrulewidth}{0.4pt}

\title{\textbf{Bibliography and References Template}\\[0.3em]\large Demonstrating Standard Entry Types}
\author{Research Documentation Office}
\date{2025}

\begin{document}

\maketitle

\section{Introduction}

This document demonstrates a standalone bibliography built with the \texttt{thebibliography} environment. Its entries are hand-formatted to mirror the standard BibTeX entry types: journal articles, books, conference papers (\texttt{inproceedings}), technical reports, theses, and miscellaneous sources. Each entry follows conventions common in computer science and engineering literature.
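
The environment's mandatory argument is a sample label used to set the width of the widest item label; \texttt{\{99\}} reserves room for two-digit labels. A minimal skeleton (entry text abbreviated) looks like this:

\begin{verbatim}
\begin{thebibliography}{99}  % widest label: two digits
\bibitem{somekey}
A.~Author, ``Title of the work,'' ...
\end{thebibliography}
\end{verbatim}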

To cite a reference, use the \verb|\cite{}| command. For example, the foundational paper on attention mechanisms~\cite{vaswani2017attention} transformed the field of natural language processing. Convolutional neural networks were popularized by LeCun et al.~\cite{lecun1998gradient}, building on earlier theoretical work~\cite{rosenblatt1958perceptron}. For a comprehensive treatment of machine learning, see Bishop~\cite{bishop2006pattern} or Goodfellow et al.~\cite{goodfellow2016deep}.

Recent advances in large language models~\cite{brown2020language} have drawn on scaling laws~\cite{kaplan2020scaling} and reinforcement learning from human feedback~\cite{ouyang2022training}. The transformer architecture has also been applied to computer vision~\cite{dosovitskiy2021image}. For an overview of optimization methods, see Kingma and Ba~\cite{kingma2015adam}. The ethical implications of these systems have been discussed extensively~\cite{bender2021dangers}, and benchmarking methodologies continue to evolve~\cite{wang2019superglue}.

\section{Annotated Reference List}

Below is a brief annotation for each entry type included in the bibliography:

\begin{description}
    \item[Article] A paper published in a journal or periodical. Required fields: author, title, journal, year. See~\cite{vaswani2017attention}, \cite{lecun1998gradient}, \cite{rosenblatt1958perceptron}.

    \item[Book] A published book with an explicit publisher. Required fields: author/editor, title, publisher, year. See~\cite{bishop2006pattern}, \cite{goodfellow2016deep}.

    \item[Inproceedings] A paper in a conference proceedings volume. Required fields: author, title, booktitle, year. See~\cite{brown2020language}, \cite{dosovitskiy2021image}, \cite{kingma2015adam}, \cite{wang2019superglue}.

    \item[Techreport] A report published by an institution. Required fields: author, title, institution, year. See~\cite{kaplan2020scaling}.

    \item[Misc] Anything that does not fit other categories---preprints, software, datasets, websites. See~\cite{ouyang2022training}.

    \item[InCollection] A paper or chapter within a larger collected work. See~\cite{bender2021dangers}.

    \item[PhdThesis / MastersThesis] A graduate thesis. Required fields: author, title, school, year. No thesis entry appears in the list below; the layout otherwise parallels \texttt{techreport}, with the school in place of the institution.
\end{description}
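
For larger projects, the same entries are usually kept in an external BibTeX database rather than a hand-formatted \texttt{thebibliography} list. As a sketch, assuming a hypothetical \texttt{references.bib} file, the first entry below could be written as:

\begin{verbatim}
@article{vaswani2017attention,
  author  = {Vaswani, Ashish and Shazeer, Noam and others},
  title   = {Attention is all you need},
  journal = {Advances in Neural Information Processing Systems},
  volume  = {30},
  pages   = {5998--6008},
  year    = {2017}
}
\end{verbatim}

and loaded by replacing the manual list with \verb|\bibliographystyle{ieeetr}| and \verb|\bibliography{references}|, then running BibTeX.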

\newpage

\begin{thebibliography}{99}

\bibitem{vaswani2017attention}
A.~Vaswani, N.~Shazeer, N.~Parmar, J.~Uszkoreit, L.~Jones, A.~N.~Gomez, \L.~Kaiser, and I.~Polosukhin,
``Attention is all you need,''
\textit{Advances in Neural Information Processing Systems}, vol.~30, pp.~5998--6008, 2017.

\bibitem{lecun1998gradient}
Y.~LeCun, L.~Bottou, Y.~Bengio, and P.~Haffner,
``Gradient-based learning applied to document recognition,''
\textit{Proceedings of the IEEE}, vol.~86, no.~11, pp.~2278--2324, Nov.~1998.

\bibitem{rosenblatt1958perceptron}
F.~Rosenblatt,
``The perceptron: A probabilistic model for information storage and organization in the brain,''
\textit{Psychological Review}, vol.~65, no.~6, pp.~386--408, 1958.

\bibitem{bishop2006pattern}
C.~M.~Bishop,
\textit{Pattern Recognition and Machine Learning}.
New York: Springer, 2006.

\bibitem{goodfellow2016deep}
I.~Goodfellow, Y.~Bengio, and A.~Courville,
\textit{Deep Learning}.
Cambridge, MA: MIT Press, 2016.
Available: \url{https://www.deeplearningbook.org/}

\bibitem{brown2020language}
T.~Brown, B.~Mann, N.~Ryder, M.~Subbiah, J.~D.~Kaplan, P.~Dhariwal, A.~Neelakantan, P.~Shyam, G.~Sastry, A.~Askell, \textit{et~al.},
``Language models are few-shot learners,''
in \textit{Advances in Neural Information Processing Systems}, vol.~33, pp.~1877--1901, 2020.

\bibitem{dosovitskiy2021image}
A.~Dosovitskiy, L.~Beyer, A.~Kolesnikov, D.~Weissenborn, X.~Zhai, T.~Unterthiner, M.~Dehghani, M.~Minderer, G.~Heigold, S.~Gelly, J.~Uszkoreit, and N.~Houlsby,
``An image is worth 16x16 words: Transformers for image recognition at scale,''
in \textit{Proc.\ International Conference on Learning Representations (ICLR)}, 2021.

\bibitem{kaplan2020scaling}
J.~Kaplan, S.~McCandlish, T.~Henighan, T.~B.~Brown, B.~Chess, R.~Child, S.~Gray, A.~Radford, J.~Wu, and D.~Amodei,
``Scaling laws for neural language models,''
Tech.\ Rep., OpenAI, Jan.~2020.
Available: \url{https://arxiv.org/abs/2001.08361}

\bibitem{ouyang2022training}
L.~Ouyang, J.~Wu, X.~Jiang, D.~Almeida, C.~Wainwright, P.~Mishkin, C.~Zhang, S.~Agarwal, K.~Slama, A.~Ray, \textit{et~al.},
``Training language models to follow instructions with human feedback,''
preprint, arXiv:2203.02155, Mar.~2022.

\bibitem{kingma2015adam}
D.~P.~Kingma and J.~Ba,
``Adam: A method for stochastic optimization,''
in \textit{Proc.\ International Conference on Learning Representations (ICLR)}, San Diego, CA, 2015.

\bibitem{bender2021dangers}
E.~M.~Bender, T.~Gebru, A.~McMillan-Major, and S.~Shmitchell,
``On the dangers of stochastic parrots: Can language models be too big?''
in \textit{Proc.\ ACM Conference on Fairness, Accountability, and Transparency (FAccT)}, pp.~610--623, 2021.

\bibitem{wang2019superglue}
A.~Wang, Y.~Pruksachatkun, N.~Nangia, A.~Singh, J.~Michael, F.~Hill, O.~Levy, and S.~R.~Bowman,
``SuperGLUE: A stickier benchmark for general-purpose language understanding systems,''
in \textit{Advances in Neural Information Processing Systems}, vol.~32, 2019.

\end{thebibliography}

\end{document}