| prompts | description |
|---|---|
Given the following machine learning model name: Macaw, provide a description of the model | **Macaw** is a generative question-answering (QA) system that is built on UnifiedQA, itself built on [T5](https://paperswithcode.com/method/t5). Macaw has three interesting features. First, it often produces high-quality answers to questions far outside the domain it was trained on, sometimes surprisingly so. Second, Macaw allows different permutations (“angles”) of inputs and outputs to be used. For example, we can give it a question and get an answer; or give it an answer and get a question; or give it a question and answer and get a set of multiple-choice (MC) options for that question. This multi-angle QA capability allows versatility in the way Macaw can be used, including recursively using outputs as new inputs to the system. Finally, Macaw also generates explanations as an optional output (or even input) element. |
Given the following machine learning model name: Tanh Activation, provide a description of the model | **Tanh Activation** is an activation function used for neural networks:
$$f\left(x\right) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$
Historically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.
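A quick numerical check of the definition above, in plain Python (the naive exponential form is shown for clarity; it overflows for large $|x|$, where library implementations remain stable):

```python
import math

def tanh(x):
    # Hyperbolic tangent from its exponential definition:
    # (e^x - e^-x) / (e^x + e^-x). Naive form: overflows for large |x|.
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

print(tanh(0.0))             # 0.0 -- zero-centred, unlike the sigmoid
print(round(tanh(1.0), 4))   # 0.7616
print(round(tanh(-1.0), 4))  # -0.7616 -- odd function
```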
Image Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng) |
Given the following machine learning model name: Non-linear Independent Component Estimation, provide a description of the model | **NICE**, or **Non-Linear Independent Components Estimation**, is a framework for modeling complex high-dimensional densities. It is based on the idea that a good representation is one in which the data has a distribution that is easy to model. For this purpose, a non-linear deterministic transformation of the data is learned that maps it to a latent space so as to make the transformed data conform to a factorized distribution, i.e., resulting in independent latent variables. The transformation is parameterised so that computing the determinant of the Jacobian and inverse Jacobian is trivial, yet it maintains the ability to learn complex non-linear transformations, via a composition of simple building blocks, each based on a deep neural network. The training criterion is simply the exact log-likelihood. The transformation used in NICE is the [affine coupling](https://paperswithcode.com/method/affine-coupling) layer without the scale term, known as the additive coupling layer:
$$ y\_{I\_{2}} = x\_{I\_{2}} + m\left(x\_{I\_{1}}\right) $$
$$ x\_{I\_{2}} = y\_{I\_{2}} - m\left(y\_{I\_{1}}\right) $$ |
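The invertibility of the additive coupling layer can be checked with a toy NumPy sketch, where the coupling function $m$ is an arbitrary map - here a randomly initialized affine function, purely for illustration (in NICE it is a deep neural network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative coupling function m (in NICE this is a deep neural network).
W = rng.normal(size=(2, 2))
b = rng.normal(size=2)

def m(x1):
    return W @ x1 + b

def forward(x1, x2):
    # y_{I1} = x_{I1};  y_{I2} = x_{I2} + m(x_{I1})
    return x1, x2 + m(x1)

def inverse(y1, y2):
    # x_{I1} = y_{I1};  x_{I2} = y_{I2} - m(y_{I1})
    return y1, y2 - m(y1)

x1, x2 = rng.normal(size=2), rng.normal(size=2)
y1, y2 = forward(x1, x2)
r1, r2 = inverse(y1, y2)
assert np.allclose(r1, x1) and np.allclose(r2, x2)
```

Because the output only adds a function of the untouched half, the Jacobian is triangular with unit diagonal, so its determinant is exactly 1 no matter how complex $m$ is.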
Given the following machine learning model name: Detailed Expression Capture and Animation, provide a description of the model | **Detailed Expression Capture and Animation**, or **DECA**, is a model for 3D face reconstruction that is trained to robustly produce a UV displacement map from a low-dimensional latent representation that consists of person-specific detail parameters and generic expression parameters, while a regressor is trained to predict detail, shape, albedo, expression, pose and illumination parameters from a single image. A detail-consistency loss is used to disentangle person-specific details and expression-dependent wrinkles. This disentanglement allows us to synthesize realistic person-specific wrinkles by controlling expression parameters while keeping person-specific details unchanged. |
Given the following machine learning model name: GFP-GAN, provide a description of the model | **GFP-GAN** is a generative adversarial network for blind face restoration that leverages a generative facial prior (GFP). This Generative Facial Prior (GFP) is incorporated into the face restoration process via channel-split spatial feature transform layers, which allow for a good balance between realness and fidelity. As a whole, the GFP-GAN consists of a degradation removal module ([U-Net](https://paperswithcode.com/method/u-net)) and a pretrained face [StyleGAN](https://paperswithcode.com/method/stylegan) as a facial prior. They are bridged by a latent code mapping and several Channel-Split [Spatial Feature Transform](https://paperswithcode.com/method/spatial-feature-transform) (CS-SFT) layers. During training, 1) intermediate restoration losses are employed to remove complex degradation, 2) Facial component loss with discriminators is used to enhance facial details, and 3) identity preserving loss is used to retain face identity. |
Given the following machine learning model name: IoU-Balanced Sampling, provide a description of the model | **IoU-Balanced Sampling** is a hard mining method for object detection. Suppose we need to sample $N$ negative samples from $M$ corresponding candidates. The selected probability for each sample under random sampling is:
$$ p = \frac{N}{M} $$
To raise the selected probability of hard negatives, we evenly split the sampling interval into $K$ bins according to IoU. $N$ demanded negative samples are equally distributed to each bin. Then we select samples from them uniformly. Therefore, we get the selected probability under IoU-balanced sampling:
$$ p\_{k} = \frac{N}{K}*\frac{1}{M\_{k}}\text{ , } k\in\left[0, K\right)$$
where $M\_{k}$ is the number of sampling candidates in the corresponding interval denoted by $k$. $K$ is set to 3 by default in our experiments.
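The scheme above can be sketched in NumPy as follows (a simplified illustration, not the reference implementation; bins that run short of candidates are topped up by plain random sampling):

```python
import numpy as np

def iou_balanced_sample(ious, n_samples, n_bins=3, seed=0):
    # ious: max IoU of each negative candidate with any ground-truth box.
    rng = np.random.default_rng(seed)
    ious = np.asarray(ious)
    edges = np.linspace(0.0, ious.max() + 1e-9, n_bins + 1)
    per_bin = n_samples // n_bins
    picked = []
    for k in range(n_bins):
        # Candidates whose IoU falls into the k-th bin.
        idx = np.where((ious >= edges[k]) & (ious < edges[k + 1]))[0]
        take = min(per_bin, len(idx))
        picked.extend(rng.choice(idx, size=take, replace=False))
    if len(picked) < n_samples:  # top up from the remaining candidates
        rest = np.setdiff1d(np.arange(len(ious)), picked)
        picked.extend(rng.choice(rest, size=n_samples - len(picked), replace=False))
    return np.array(picked)

ious = np.random.default_rng(1).uniform(0.0, 0.5, size=300)
negatives = iou_balanced_sample(ious, n_samples=30)
```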
The sampled histogram with IoU-balanced sampling is shown by green color in the Figure to the right. The IoU-balanced sampling can guide the distribution of training samples close to the one of hard negatives. |
Given the following machine learning model name: TinaFace, provide a description of the model | **TinaFace** is a type of face detection method that is based on generic object detection. It consists of (a) Feature Extractor: [ResNet](https://paperswithcode.com/method/resnet)-50 and a 6-level [Feature Pyramid Network](https://www.paperswithcode.com/method/fpn) to extract the multi-scale features of the input image; (b) an Inception block to enhance the receptive field; (c) Classification Head: a 5-layer [FCN](https://paperswithcode.com/method/fcn) for classifying anchors; (d) Regression Head: a 5-layer [FCN](https://paperswithcode.com/method/fcn) for regressing anchors to ground-truth object boxes; (e) IoU Aware Head: a single convolutional layer for IoU prediction. |
Given the following machine learning model name: Four-dimensional A-star, provide a description of the model | The aim of 4D A* is to find the shortest path between two four-dimensional (4D) nodes of a 4D search space - a starting node and a target node - whenever such a path exists. It achieves both optimality and completeness: the former because the path found is the shortest possible, and the latter because, if a solution exists, the algorithm is guaranteed to find it. |
Given the following machine learning model name: HS-ResNet, provide a description of the model | **HS-ResNet** is a [convolutional neural network](https://paperswithcode.com/methods/category/convolutional-neural-networks) that employs [Hierarchical-Split Block](https://paperswithcode.com/method/hierarchical-split-block) as its central building block within a [ResNet](https://paperswithcode.com/method/resnet)-like architecture. |
Given the following machine learning model name: BiGG, provide a description of the model | **BiGG** is an autoregressive model for generative modeling of sparse graphs. It utilizes sparsity to avoid generating the full adjacency matrix, and reduces the graph generation time complexity to $O((n + m)\log n)$. Furthermore, during training this autoregressive model can be parallelized with $O(\log n)$ synchronization stages, which makes it much more efficient than other autoregressive models that require $\Omega(n)$ such stages. The approach is based on three key elements: (1) an $O(\log n)$ process for generating each edge using a binary tree data structure, inspired by R-MAT; (2) a tree-structured autoregressive model for generating the set of edges associated with each node; and (3) an autoregressive model defined over the sequence of nodes. |
Given the following machine learning model name: Hamburger, provide a description of the model | **Hamburger** is a global context module that employs matrix decomposition to factorize the learned representation into sub-matrices so as to recover the clean low-rank signal subspace. The key idea is, if we formulate the inductive bias like the global context into an objective function, the optimization algorithm to minimize the objective function can construct a computational graph, i.e., the architecture we need in the networks. |
Given the following machine learning model name: Implicit PointRend, provide a description of the model | **Implicit PointRend** is a modification to the [PointRend](https://paperswithcode.com/method/pointrend) module for instance segmentation. Instead of a coarse mask prediction used in [PointRend](https://paperswithcode.com/method/pointrend) to provide region-level context to distinguish objects, for each object Implicit PointRend generates different parameters for a function that makes the final pointwise mask prediction. The new model is more straightforward than PointRend: (1) it does not require an importance point sampling during training and (2) it uses a single point-level mask loss instead of two mask losses. Implicit PointRend can be trained directly with point supervision without any intermediate prediction interpolation steps. |
Given the following machine learning model name: SNet, provide a description of the model | **SNet** is a convolutional neural network architecture and object detection backbone used for the [ThunderNet](https://paperswithcode.com/method/thundernet) two-stage object detector. SNet uses ShuffleNetV2 basic blocks but replaces all 3×3 depthwise convolutions with 5×5 depthwise convolutions. |
Given the following machine learning model name: Adaptive Label Smoothing, provide a description of the model | |
Given the following machine learning model name: Charformer, provide a description of the model | **Charformer** is a type of [Transformer](https://paperswithcode.com/methods/category/transformers) model that learns a subword tokenization end-to-end as part of the model. Specifically it uses [GBST](https://paperswithcode.com/method/gradient-based-subword-tokenization) that automatically learns latent subword representations from characters in a data-driven fashion. Following GBST, the soft subword sequence is passed through [Transformer](https://paperswithcode.com/method/transformer) layers. |
Given the following machine learning model name: Relativistic GAN, provide a description of the model | A **Relativistic GAN** is a type of generative adversarial network. It has a relativistic discriminator which estimates the probability that the given real data is more realistic than a randomly sampled fake data. The idea is to endow GANs with the property that the probability of real data being real ($D\left(x\_{r}\right)$) should decrease as the probability of fake data being real ($D\left(x\_{f}\right)$) increases.
With a standard [GAN](https://paperswithcode.com/method/gan), we can achieve this as follows. The standard GAN discriminator can be defined, in terms of the non-transformed layer $C\left(x\right)$, as $D\left(x\right) = \text{sigmoid}\left(C\left(x\right)\right)$. A simple way to make the discriminator relativistic - having the output of $D$ depend on both real and fake data - is to sample from real/fake data pairs $\tilde{x} = \left(x\_{r}, x\_{f}\right)$ and define it as $D\left(\tilde{x}\right) = \text{sigmoid}\left(C\left(x\_{r}\right) - C\left(x\_{f}\right)\right)$. This modification can be interpreted as follows: the discriminator estimates the probability that the given real data is more realistic than a randomly sampled fake data.
More generally, a Relativistic GAN can be interpreted as having a discriminator of the form $a\left(C\left(x\_{r}\right) - C\left(x\_{f}\right)\right)$, where $a$ is the activation function. |
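The relativistic discriminator output can be illustrated numerically, treating $C(x)$ as a given scalar critic score (a toy sketch, not a full GAN training loop):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relativistic_d(c_real, c_fake):
    # D(x_r, x_f) = sigmoid(C(x_r) - C(x_f)): estimated probability that
    # the real sample is more realistic than the fake one.
    return sigmoid(c_real - c_fake)

print(relativistic_d(2.0, -1.0))  # high: real scored well above fake
print(relativistic_d(0.0, 0.0))   # 0.5: no preference either way
```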
Given the following machine learning model name: Fast Voxel Query, provide a description of the model | **Fast Voxel Query** is a module used in the [Voxel Transformer](https://paperswithcode.com/method/votr) 3D object detection model to implement self-attention, specifically Local and Dilated Attention. For each querying index $v\_{i}$, an attending voxel index $v\_{j}$ is determined by Local and Dilated Attention. Then we can look up the non-empty index $j$ in the hash table with hashed $v\_{j}$ as the key. Finally, the non-empty index $j$ is used to gather the attending feature $f\_{j}$ from $\mathcal{F}$ for [multi-head attention](https://paperswithcode.com/method/multi-head-attention). |
Given the following machine learning model name: Jukebox, provide a description of the model | **Jukebox** is a model that generates music with singing in the raw audio domain. It tackles the long context of raw audio using a multi-scale [VQ-VAE](https://paperswithcode.com/method/vq-vae) to compress it to discrete codes, and modeling those using [autoregressive Transformers](https://paperswithcode.com/methods/category/autoregressive-transformers). It can condition on artist and genre to steer the musical and vocal style, and on unaligned lyrics to make the singing more controllable.
Three separate VQ-VAE models are trained with different temporal resolutions. At each level, the input audio is segmented and encoded into latent vectors $\mathbf{h}\_{t}$, which are then quantized to the closest codebook vectors $\mathbf{e}\_{z\_{t}}$. The code $z\_{t}$ is a discrete representation of the audio that we later train our prior on. The decoder takes the sequence of codebook vectors and reconstructs the audio. The top level learns the highest degree of abstraction, since it is encoding longer audio per token while keeping the codebook size the same. Audio can be reconstructed using the codes at any one of the abstraction levels, where the least abstract bottom-level codes result in the highest-quality audio. |
Given the following machine learning model name: Pyramid Pooling Module, provide a description of the model | A **Pyramid Pooling Module** is a module for semantic segmentation which acts as an effective global contextual prior. The motivation is that, in a convolutional network like a [ResNet](https://paperswithcode.com/method/resnet), even though the theoretical receptive field is already larger than the input image, the empirical receptive field is much smaller, especially in high-level layers. As a result, many networks do not sufficiently incorporate the important global scene prior.
The PPM is an effective global prior representation that addresses this problem. It contains information at different scales, varying among different sub-regions. Using our 4-level pyramid, the pooling kernels cover the whole, half of, and small portions of the image. They are fused as the global prior. Then we concatenate the prior with the original feature map in the final part. |
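The multi-scale pooling step can be sketched in NumPy as follows; the bin sizes (1, 2, 3, 6) follow the 4-level pyramid described above, while the subsequent 1×1 convolution, upsampling, and concatenation are omitted:

```python
import numpy as np

def pyramid_pool(feat, bins=(1, 2, 3, 6)):
    # Average-pool a (C, H, W) feature map into a b x b grid per pyramid level.
    C, H, W = feat.shape
    levels = []
    for b in bins:
        out = np.zeros((C, b, b))
        hs = np.linspace(0, H, b + 1).astype(int)
        ws = np.linspace(0, W, b + 1).astype(int)
        for i in range(b):
            for j in range(b):
                out[:, i, j] = feat[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]].mean(axis=(1, 2))
        levels.append(out)
    return levels

feat = np.random.default_rng(0).random((4, 12, 12))
levels = pyramid_pool(feat)
# levels[0] is the 1x1 global average; levels[-1] is a 6x6 grid of region averages
```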
Given the following machine learning model name: WaveGrad, provide a description of the model | **WaveGrad** is a conditional model for waveform generation through estimating gradients of the data density. This model is built on the prior work on score matching and diffusion probabilistic models. It starts from Gaussian white noise and iteratively refines the signal via a gradient-based sampler conditioned on the mel-spectrogram. WaveGrad is non-autoregressive, and requires only a constant number of generation steps during inference. It can use as few as 6 iterations to generate high fidelity audio samples. |
Given the following machine learning model name: Instance Normalization, provide a description of the model | **Instance Normalization** (also known as contrast normalization) is a normalization layer where:
$$
y_{tijk} = \frac{x_{tijk} - \mu_{ti}}{\sqrt{\sigma_{ti}^2 + \epsilon}},
\quad
\mu_{ti} = \frac{1}{HW}\sum_{l=1}^W \sum_{m=1}^H x_{tilm},
\quad
\sigma_{ti}^2 = \frac{1}{HW}\sum_{l=1}^W \sum_{m=1}^H (x_{tilm} - \mu_{ti})^2.
$$
This prevents instance-specific mean and covariance shift, simplifying the learning process. Intuitively, the normalization process removes instance-specific contrast information from the content image in tasks such as image stylization, which simplifies generation. |
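A direct NumPy translation of the formula above, assuming an $(N, C, H, W)$ layout (learnable affine parameters omitted):

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    # Per-sample, per-channel statistics over the spatial axes of an
    # (N, C, H, W) tensor; learnable affine parameters are omitted.
    mu = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

x = np.random.default_rng(0).normal(loc=3.0, scale=2.0, size=(2, 4, 8, 8))
y = instance_norm(x)
# every (sample, channel) slice of y now has ~zero mean and ~unit variance
```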
Given the following machine learning model name: DeepCluster, provide a description of the model | **DeepCluster** is a self-supervision approach for learning image representations. DeepCluster iteratively groups the features with a standard clustering algorithm, k-means, and uses the subsequent assignments as supervision to update the weights of the network. |
Given the following machine learning model name: Stochastic Regularized Majorization-Minimization, provide a description of the model | |
Given the following machine learning model name: Eligibility Trace, provide a description of the model | An **Eligibility Trace** is a memory vector $\textbf{z}\_{t} \in \mathbb{R}^{d}$ that parallels the long-term weight vector $\textbf{w}\_{t} \in \mathbb{R}^{d}$. The idea is that when a component of $\textbf{w}\_{t}$ participates in producing an estimated value, the corresponding component of $\textbf{z}\_{t}$ is bumped up and then begins to fade away. Learning will then occur in that component of $\textbf{w}\_{t}$ if a nonzero TD error occurs before the trace falls back to zero. The trace-decay parameter $\lambda \in \left[0, 1\right]$ determines the rate at which the trace falls.
Intuitively, they tackle the credit assignment problem by capturing both a frequency heuristic - states that are visited more often deserve more credit - and a recency heuristic - states that are visited more recently deserve more credit.
$$E\_{0}\left(s\right) = 0 $$
$$E\_{t}\left(s\right) = \gamma\lambda{E}\_{t-1}\left(s\right) + \textbf{1}\left(S\_{t} = s\right) $$
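The accumulating-trace update above can be sketched as follows (with illustrative values $\gamma = 0.99$, $\lambda = 0.9$):

```python
import numpy as np

def update_traces(E, state, gamma=0.99, lam=0.9):
    # Decay every trace by gamma * lambda, then bump the visited state.
    E = gamma * lam * E
    E[state] += 1.0
    return E

E = np.zeros(5)
for s in [0, 1, 1, 2]:  # a short trajectory of visited states
    E = update_traces(E, s)
# frequency: state 1 (visited twice) ends with the largest trace;
# recency: state 0 (the oldest single visit) has decayed below state 2
```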
Source: Sutton and Barto, Reinforcement Learning, 2nd Edition |
Given the following machine learning model name: TAPAS, provide a description of the model | **TAPAS** is a weakly supervised question answering model that reasons over tables without generating logical forms. TAPAS predicts a minimal program by selecting a subset of the table cells and a possible aggregation operation to be executed on top of them. Consequently, TAPAS can learn operations from natural language, without the need to specify them in some formalism. This is implemented by extending [BERT](https://paperswithcode.com/method/bert)’s architecture with additional embeddings that capture tabular structure, and with two classification layers for selecting cells and predicting a corresponding aggregation operator. |
Given the following machine learning model name: Dynamic Algorithm Configuration, provide a description of the model | Dynamic algorithm configuration (DAC) is capable of generalizing over prior optimization approaches, as well as handling optimization of hyperparameters that need to be adjusted over multiple time-steps.
Image Source: [Biedenkapp et al.](http://ecai2020.eu/papers/1237_paper.pdf) |
Given the following machine learning model name: Direct Feedback Alignment, provide a description of the model | |
Given the following machine learning model name: Forward gradient, provide a description of the model | Forward gradients are unbiased estimators of the gradient $\nabla f(\theta)$ for a function $f: \mathbb{R}^n \rightarrow \mathbb{R}$, given by $g(\theta) = \langle \nabla f(\theta) , v \rangle v$.
Here $v = (v_1, \ldots, v_n)$ is a random vector, which must satisfy the following conditions in order for $g(\theta)$ to be an unbiased estimator of $\nabla f(\theta)$
* $v_i \perp v_j$ for all $i \neq j$
* $\mathbb{E}[v_i] = 0$ for all $i$
* $\mathbb{V}[v_i] = 1$ for all $i$
Forward gradients can be computed with a single JVP (Jacobian-vector product), which enables the use of the forward mode of automatic differentiation instead of the usual reverse mode, whose need to store intermediate activations gives it a larger memory footprint. |
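Here is a sketch of the estimator with Rademacher-distributed $v$ (which satisfies all three conditions above); the JVP is approximated by a finite difference, standing in for true forward-mode autodiff:

```python
import numpy as np

def forward_grad(f, theta, seed=0, h=1e-6):
    # g(theta) = <grad f(theta), v> v, with the directional derivative
    # approximated by a finite difference (a stand-in for a true JVP).
    rng = np.random.default_rng(seed)
    v = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher: zero mean, unit variance
    directional = (f(theta + h * v) - f(theta)) / h  # ~ <grad f(theta), v>
    return directional * v

f = lambda x: float((x ** 2).sum())
theta = np.array([1.0, 2.0, 3.0])
g = forward_grad(f, theta)
# a single draw is noisy, but the estimator is unbiased: E[g] = [2, 4, 6]
```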
Given the following machine learning model name: Variational Trace Distance Estimation, provide a description of the model | **Variational Trace Distance Estimation**, or **VTDE**, is a variational algorithm for trace distance estimation that involves only one ancillary qubit. Notably, the cost function in VTDE gathers information from a single-qubit observable and thus can avoid the barren plateau issue with logarithmic-depth parameterized circuits. |
Given the following machine learning model name: MNN, provide a description of the model | **Mobile Neural Network (MNN)** is a mobile inference engine tailored to mobile applications. The contributions of MNN include: (1) presenting a mechanism called pre-inference that manages to conduct runtime optimization; (2) delivering thorough kernel optimization on operators to achieve optimal computation performance; (3) introducing backend abstraction module which enables hybrid scheduling and keeps the engine lightweight. |
Given the following machine learning model name: Adversarially Learned Inference, provide a description of the model | **Adversarially Learned Inference (ALI)** is a generative modelling approach that casts the learning of both an inference machine (or encoder) and a deep directed generative model (or decoder) in a GAN-like adversarial framework. A discriminator is trained to discriminate joint samples of the data and the corresponding latent variable from the encoder (or approximate posterior) from joint samples from the decoder, while in opposition, the encoder and the decoder are trained together to fool the discriminator. The discriminator is not only asked to distinguish synthetic samples from real data; it is required to distinguish between two joint distributions over the data space and the latent variables.
An ALI differs from a [GAN](https://paperswithcode.com/method/gan) in two ways:
- The generator has two components: the encoder, $G\_{z}\left(\mathbf{x}\right)$, which maps data samples $x$ to $z$-space, and the decoder $G\_{x}\left(\mathbf{z}\right)$, which maps samples from the prior $p\left(\mathbf{z}\right)$ (a source of noise) to the input space.
- The discriminator is trained to distinguish between joint pairs $\left(\mathbf{x}, \tilde{\mathbf{z}} = G\_{\mathbf{x}}\left(\mathbf{x}\right)\right)$ and $\left(\tilde{\mathbf{x}} =
G\_{x}\left(\mathbf{z}\right), \mathbf{z}\right)$, as opposed to marginal samples $\mathbf{x} \sim q\left(\mathbf{x}\right)$ and $\tilde{\mathbf{x}} \sim p\left(\mathbf{x}\right)$. |
Given the following machine learning model name: COLA, provide a description of the model | **COLA** is a self-supervised pre-training approach for learning a general-purpose representation of audio. It is based on contrastive learning: it learns a representation which assigns high similarity to audio segments extracted from the same recording while assigning lower similarity to segments from different recordings. |
Given the following machine learning model name: Shifted Softplus, provide a description of the model | **Shifted Softplus** is an activation function ${\rm ssp}(x) = \ln( 0.5 e^{x} + 0.5 )$, which [SchNet](https://paperswithcode.com/method/schnet) employs as non-linearity throughout the network in order to obtain a smooth potential energy surface. The shifting ensures that ${\rm ssp}(0) = 0$ and improves the convergence of the network. This activation function shows similarity to ELUs, while having infinite order of continuity. |
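The definition and the zero-shift property are easy to verify in plain Python:

```python
import math

def shifted_softplus(x):
    # ssp(x) = ln(0.5 * e^x + 0.5) = softplus(x) - ln 2, so ssp(0) = ln(1) = 0.
    return math.log(0.5 * math.exp(x) + 0.5)

print(shifted_softplus(0.0))  # 0.0 exactly
```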
Given the following machine learning model name: Local Patch Interaction, provide a description of the model | **Local Patch Interaction**, or **LPI**, is a module used for the [XCiT layer](https://paperswithcode.com/method/xcit-layer) to enable explicit communication across patches. LPI consists of two [depth-wise 3×3 convolutional layers](https://paperswithcode.com/method/depthwise-convolution) with [Batch Normalization](https://paperswithcode.com/method/batch-normalization) and [GELU](https://paperswithcode.com/method/gelu) non-linearity in between. Due to its depth-wise structure, the LPI block has a negligible overhead in terms of parameters, as well as a limited overhead in terms of throughput and memory usage during inference. |
Given the following machine learning model name: Feedback Memory, provide a description of the model | **Feedback Memory** is a type of attention module used in the [Feedback Transformer](https://paperswithcode.com/method/feedback-transformer) architecture. It allows a [transformer](https://paperswithcode.com/method/transformer) to use the most abstract representations from the past directly as inputs for the current timestep. This means that the model does not form its representation in parallel, but sequentially token by token. More precisely, we replace the context inputs to attention modules with memory vectors that are computed over the past, i.e.:
$$ \mathbf{z}^{l}\_{t} = \text{Attn}\left(\mathbf{x}^{l}\_{t}, \left[\mathbf{m}\_{t−\tau}, \dots, \mathbf{m}\_{t−1}\right]\right) $$
where a memory vector $\mathbf{m}\_{t}$ is computed by summing the representations of each layer at the $t$-th time step:
$$ \mathbf{m}\_{t} = \sum^{L}\_{l=0}\text{Softmax}\left(w^{l}\right)\mathbf{x}\_{t}^{l} $$
where $w^{l}$ are learnable scalar parameters. Here $l = 0$ corresponds to token embeddings. The weighting of different layers by a [softmax](https://paperswithcode.com/method/softmax) output gives the model more flexibility as it can average them or select one of them. This modification of the self-attention input adapts the computation of the Transformer from parallel to sequential, summarized in the Figure. Indeed, it gives the ability to formulate the representation $\mathbf{x}^{l}\_{t+1}$ based on past representations from any layer $l'$, while in a standard Transformer this is only true for $l > l'$. This change can be viewed as exposing all previous computations to all future computations, providing better representations of the input. Such capacity would allow much shallower models to capture the same level of abstraction as a deeper architecture. |
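The memory-vector computation above can be sketched as follows (toy dimensions and random layer representations, purely for illustration):

```python
import numpy as np

def softmax(w):
    e = np.exp(w - np.max(w))
    return e / e.sum()

def memory_vector(layer_reps, w):
    # m_t = sum_l softmax(w)_l * x_t^l : a softmax-weighted sum of the
    # representations of every layer at one timestep.
    weights = softmax(np.asarray(w, dtype=float))
    return sum(wl * x for wl, x in zip(weights, layer_reps))

d = 8
layer_reps = [np.random.default_rng(l).normal(size=d) for l in range(4)]
m = memory_vector(layer_reps, w=[0.0, 0.0, 0.0, 0.0])  # equal weights -> plain average
```

With equal logits the memory is the average of all layers; with one dominant logit it effectively selects a single layer, matching the flexibility described above.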
Given the following machine learning model name: MobileDet, provide a description of the model | **MobileDet** is an object detection model developed for mobile accelerators. MobileDet uses regular convolutions extensively on EdgeTPUs and DSPs, especially in the early stages of the network where depthwise convolutions tend to be less efficient. This helps boost the latency-accuracy trade-off for object detection on accelerators, provided that they are placed strategically in the network via [neural architecture search](https://paperswithcode.com/method/neural-architecture-search). By incorporating regular convolutions in the search space and directly optimizing the network architectures for object detection, an efficient family of object detection models is obtained. |
Given the following machine learning model name: Softplus, provide a description of the model | **Softplus** is an activation function $f\left(x\right) = \log\left(1+\exp\left(x\right)\right)$. It can be viewed as a smooth version of [ReLU](https://paperswithcode.com/method/relu). |
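The naive form $\log(1+\exp(x))$ overflows for large $x$; the standard numerically stable rewrite is shown below:

```python
import numpy as np

def softplus(x):
    # Numerically stable: log(1 + e^x) = max(x, 0) + log1p(e^{-|x|}).
    x = np.asarray(x, dtype=float)
    return np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))

print(softplus(np.array([-1000.0, 0.0, 1000.0])))  # no overflow even at |x| = 1000
```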
Given the following machine learning model name: ThunderNet, provide a description of the model | **ThunderNet** is a two-stage object detection model. The design of ThunderNet aims at the computationally expensive structures in state-of-the-art two-stage detectors. The backbone utilises a [ShuffleNetV2](https://paperswithcode.com/method/shufflenet-v2) inspired network called [SNet](https://paperswithcode.com/method/snet) designed for object detection. In the detection part, ThunderNet follows the detection head design in Light-Head [R-CNN](https://paperswithcode.com/method/r-cnn), and further compresses the [RPN](https://paperswithcode.com/method/rpn) and R-CNN subnet. To eliminate the performance degradation induced by small backbones and small feature maps, ThunderNet uses two new efficient architecture blocks, [Context Enhancement Module](https://paperswithcode.com/method/context-enhancement-module) (CEM) and [Spatial Attention Module](https://paperswithcode.com/method/spatial-attention-module) (SAM). CEM combines the feature maps from multiple scales to leverage local and global context information, while SAM uses the information learned in RPN to refine the feature distribution in RoI warping. |
Given the following machine learning model name: LeNet, provide a description of the model | **LeNet** is a classic convolutional neural network employing the use of convolutions, pooling and fully connected layers. It was used for the handwritten digit recognition task with the MNIST dataset. The architectural design served as inspiration for future networks such as [AlexNet](https://paperswithcode.com/method/alexnet) and [VGG](https://paperswithcode.com/method/vgg).
[code](https://github.com/Elman295/Paper_with_code/blob/main/LeNet_5_Pytorch.ipynb) |
Given the following machine learning model name: Point Gathering Network, provide a description of the model | **PGNet** is a point-gathering network for reading arbitrarily-shaped text in real-time. It is a single-shot text spotter, where the pixel-level character classification map is learned with the proposed PG-CTC loss, avoiding the use of character-level annotations. With a PG-CTC decoder, we gather high-level character classification vectors from two-dimensional space and decode them into text symbols without NMS or RoI operations involved, which guarantees high efficiency. Additionally, reasoning about the relations between each character and its neighbors, a graph refinement module (GRM) is proposed to optimize the coarse recognition and improve the end-to-end performance. |
Given the following machine learning model name: Large-scale spectral clustering, provide a description of the model | # [Spectral Clustering](https://paperswithcode.com/method/spectral-clustering)
Spectral clustering aims to partition the data points into $k$ clusters using the spectrum of the graph Laplacian.
Given a dataset $X$ with $N$ data points, the spectral clustering algorithm first constructs a similarity matrix $W$, where $w_{ij}$ indicates the similarity between data points $x_i$ and $x_j$ via a similarity measure metric.
Let $L=D-W$, where $L$ is called the graph Laplacian and $D$ is a diagonal matrix with $d_{ii} = \sum_{j=1}^{N} w_{ij}$.
The objective function of spectral clustering can be formulated based on the graph Laplacian as follow:
\begin{equation}
\label{eq:SC_obj}
{\min_{{U}} \operatorname{tr}\left({U}^{T} {L} {U}\right)}, \\ {\text { s.t. } \quad {U}^{T} {{U}={I}}},
\end{equation}
where $\operatorname{tr}(\cdot)$ denotes the trace of a matrix.
The rows of matrix ${U}$ are the low dimensional embedding of the original data points.
Generally, spectral clustering computes ${U}$ as the bottom $k$ eigenvectors of ${L}$, and finally applies $k$-means on ${U}$ to obtain the clustering results.
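The embedding step of this conventional pipeline can be sketched with a toy graph of two disconnected cliques (NumPy, unnormalized Laplacian):

```python
import numpy as np

def spectral_embedding(W, k):
    # Bottom-k eigenvectors of the unnormalized graph Laplacian L = D - W.
    D = np.diag(W.sum(axis=1))
    L = D - W
    _, vecs = np.linalg.eigh(L)  # eigenvalues returned in ascending order
    return vecs[:, :k]           # row i is the embedding of data point i

# Toy graph: two disconnected 3-cliques. The 2-D embedding separates them
# exactly, so running k-means on the rows would recover the two clusters.
W = np.zeros((6, 6))
W[:3, :3] = 1.0
W[3:, 3:] = 1.0
np.fill_diagonal(W, 0.0)
U = spectral_embedding(W, k=2)
```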
# Large-scale Spectral Clustering
To capture the relationship between all data points in $X$, an $N\times N$ similarity matrix needs to be constructed in conventional spectral clustering, which costs $O(N^2d)$ time and $O(N^2)$ memory and is not feasible for large-scale clustering tasks.
Instead of a full similarity matrix, many accelerated spectral clustering methods use a similarity sub-matrix, representing each data point by its cross-similarity to a set of representative data points (i.e., landmarks) via some similarity measure, as
\begin{equation}
\label{eq: cross-similarity}
B = \Phi(X,R),
\end{equation}
where $R = \{r_1,r_2,\dots, r_p \}$ ($p \ll N$) is a set of landmarks with the same dimension as $X$, $\Phi(\cdot)$ denotes a similarity measure, and $B\in \mathbb{R}^{N\times p}$ is the similarity sub-matrix that represents $X \in \mathbb{R}^{N\times d}$ with respect to $R\in \mathbb{R}^{p\times d}$.
For large-scale spectral clustering using such a similarity sub-matrix, a symmetric similarity matrix $W$ can be designed as
\begin{equation}
\label{eq:WusedB}
W=\left[\begin{array}{ll}
\mathbf{0} & B \\
B^{T} & \mathbf{0}
\end{array}\right].
\end{equation}
The size of matrix $W$ is $(N+p)\times (N+p)$.
Taking advantage of the bipartite structure, some fast eigen-decomposition methods can then be used to obtain the spectral embedding.
Finally, $k$-means is conducted on the embedding to obtain clustering results.
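The full landmark-based pipeline can be sketched as follows (an illustrative NumPy sketch, assuming a Gaussian kernel for $\Phi$, uniformly sampled landmarks, and an SVD of the row-normalized sub-matrix $B$ as the fast eigen-decomposition step; all function and variable names are ours):

```python
import numpy as np

def landmark_spectral_clustering(X, p, k, gamma=1.0, seed=0):
    """Cluster the rows of X (N x d) via similarities to p landmarks."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    # Landmarks R: p points sampled from the data (k-means centres also work).
    R = X[rng.choice(N, size=p, replace=False)]
    # Cross-similarity sub-matrix B = Phi(X, R), here a Gaussian kernel.
    d2 = ((X[:, None, :] - R[None, :, :]) ** 2).sum(-1)
    B = np.exp(-gamma * d2)                              # (N, p)
    # Row-normalise B; the top-k left singular vectors of the normalised
    # sub-matrix give a spectral embedding without ever forming the
    # (N+p) x (N+p) bipartite matrix W.
    Bn = B / (B.sum(axis=1, keepdims=True) + 1e-12)
    U, _, _ = np.linalg.svd(Bn, full_matrices=False)
    emb = U[:, :k]                                       # embedding of X
    # k-means on the embedding (farthest-point init + Lloyd iterations).
    idx = [0]
    for _ in range(k - 1):
        d = ((emb[:, None] - emb[idx][None]) ** 2).sum(-1).min(axis=1)
        idx.append(int(d.argmax()))
    C = emb[idx].copy()
    for _ in range(50):
        labels = ((emb[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                C[j] = emb[labels == j].mean(0)
    return labels
```

Note that the $O(Np)$ sub-matrix replaces the $O(N^2)$ full similarity matrix, which is the source of the speed-up.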
The clustering result is directly related to the quality of $B$, which consists of the similarities between data points and landmarks; the landmark selection step is therefore crucial to clustering performance. |
Given the following machine learning model name: 1-bit LAMB, provide a description of the model | **1-bit LAMB** is a communication-efficient stochastic optimization technique which introduces a novel way to support adaptive layerwise learning rates even when communication is compressed. Learning from the insights behind [1-bit Adam](https://paperswithcode.com/method/1-bit-adam), it is a 2-stage algorithm which uses [LAMB](https://paperswithcode.com/method/lamb) (warmup stage) to “pre-condition” a communication-compressed momentum SGD algorithm (compression stage). At the compression stage, where the original LAMB algorithm cannot be used to update the layerwise learning rates, 1-bit LAMB employs a novel way to adaptively scale layerwise learning rates based on information from both the warmup and compression stages. As a result, 1-bit LAMB is able to achieve large-batch optimization (LAMB)’s convergence speed under compressed communication.
There are two major differences between 1-bit LAMB and the original LAMB:
- During the compression stage, 1-bit LAMB updates the layerwise learning rate based on a novel “reconstructed gradient” computed from the compressed momentum. This makes 1-bit LAMB compatible with error compensation and able to keep track of the training dynamics under compression.
- 1-bit LAMB also introduces extra stabilized soft thresholds when updating layerwise learning rate at compression stage, which makes training more stable under compression. |
Given the following machine learning model name: MoGA-A, provide a description of the model | **MoGA-A** is a convolutional neural network optimized for mobile latency and discovered via Mobile GPU-Aware (MoGA) [neural architecture search](https://paperswithcode.com/method/neural-architecture-search). The basic building block is MBConvs (inverted residual blocks) from [MobileNetV2](https://paperswithcode.com/method/mobilenetv2). Squeeze-and-excitation layers are also experimented with. |
Given the following machine learning model name: Stacked Hourglass Network, provide a description of the model | **Stacked Hourglass Networks** are a type of convolutional neural network for pose estimation. They are based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. |
Given the following machine learning model name: Dual Multimodal Attention, provide a description of the model | In the image inpainting task, the **Dual Multimodal Attention** mechanism extracts complementary features from the word embedding along two paths via reciprocal attention, comparing the descriptive text with the complementary image areas. |
Given the following machine learning model name: Adaptive Spline Activation Function, provide a description of the model | The **Adaptive Spline Activation Function** is introduced in: Stefano Guarnieri, Francesco Piazza, and Aurelio Uncini, "Multilayer Feedforward Networks with Adaptive Spline Activation Function," IEEE Transactions on Neural Networks, Vol. 10, No. 3, May 1999.
Abstract: In this paper, a new adaptive spline activation function neural network (ASNN) is presented. Due to the ASNN’s high representation capabilities, networks with a small number of interconnections can be trained to solve both pattern recognition and data processing real-time problems. The main idea is to use a Catmull–Rom cubic spline as the neuron’s activation function, which ensures a simple structure suitable for both software and hardware implementation. Experimental results demonstrate improvements in terms of generalization capability and of learning speed in both pattern recognition and data processing tasks.
Index Terms: adaptive activation functions, function shape autotuning, generalization, generalized sigmoidal functions, multilayer perceptron, neural networks, spline neural networks. |
Given the following machine learning model name: Effective Squeeze-and-Excitation Block, provide a description of the model | **Effective Squeeze-and-Excitation Block** is an image model block based on squeeze-and-excitation, the difference being that one less FC layer is used. The authors note the SE module has a limitation: channel information loss due to dimension reduction. For avoiding high model complexity burden, two FC layers of the SE module need to reduce channel dimension. Specifically, while the first FC layer reduces input feature channels $C$ to $C/r$ using reduction ratio $r$, the second FC layer expands the reduced channels to original channel size $C$. As a result, this channel dimension reduction causes channel information loss. Therefore, effective SE (eSE) uses only one FC layer with $C$ channels instead of two FCs without channel dimension reduction, which maintains channel information. |
Given the following machine learning model name: SSD, provide a description of the model | **SSD** is a single-stage object detection method that discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes.
The fundamental improvement in speed comes from eliminating bounding box proposals and the subsequent pixel or feature resampling stage. Improvements over competing single-stage methods include using a small convolutional filter to predict object categories and offsets in bounding box locations, using separate predictors (filters) for different aspect ratio detections, and applying these filters to multiple feature maps from the later stages of a network in order to perform detection at multiple scales. |
Given the following machine learning model name: Rank Flow Embedding, provide a description of the model | |
Given the following machine learning model name: Meta-augmentation, provide a description of the model | **Meta-augmentation** helps generate more varied tasks for a single example in meta-learning. It can be distinguished from data augmentation in classic machine learning as follows: data augmentation aims to generate more varied examples within a single task, whereas meta-augmentation has the exact opposite aim of generating more varied tasks for a single example, forcing the learner to quickly learn a new task from feedback. In meta-augmentation, adding randomness discourages the base learner and model from learning trivial solutions that do not generalize to new tasks. |
Given the following machine learning model name: SRU++, provide a description of the model | **SRU++** is a self-attentive recurrent unit that combines fast recurrence and attention for sequence modeling, extending the [SRU](https://www.paperswithcode.com/method/sru) unit. The key modification of SRU++ is to incorporate more expressive non-linear operations into the recurrent network. Specifically, given the input sequence represented as a matrix $\mathbf{X} \in \mathbb{R}^{L \times d}$, the attention component computes the query, key and value representations using the following multiplications,
$$
\mathbf{Q} =\mathbf{W}^{q} \mathbf{X}^{\top}
$$
$$
\mathbf{K} =\mathbf{W}^{k} \mathbf{Q}
$$
$$
\mathbf{V} =\mathbf{W}^{v} \mathbf{Q}
$$
where $\mathbf{W}^{q} \in \mathbb{R}^{d^{\prime} \times d}, \mathbf{W}^{k}, \mathbf{W}^{v} \in \mathbb{R}^{d^{\prime} \times d^{\prime}}$ are model parameters. $d^{\prime}$ is the attention dimension that is typically much smaller than $d$. Note that the keys $\mathbf{K}$ and values $\mathbf{V}$ are computed using $\mathbf{Q}$ instead of $\mathbf{X}$ such that the weight matrices $\mathbf{W}^{k}$ and $\mathbf{W}^{v}$ are significantly smaller.
Next, we compute a weighted average output $\mathbf{A} \in \mathbb{R}^{d^{\prime} \times L}$ using [scaled dot-product attention](https://paperswithcode.com/method/scaled):
$$
\mathbf{A}^{\top}=\operatorname{softmax}\left(\frac{\mathbf{Q}^{\top} \mathbf{K}}{\sqrt{d^{\prime}}}\right) \mathbf{V}^{\top}
$$
The final output $U$ required by the elementwise recurrence is obtained by another linear projection,
$$
\mathbf{U}^{\top}=\mathbf{W}^{o}(\mathbf{Q}+\alpha \cdot \mathbf{A})
$$
where $\alpha \in \mathbb{R}$ is a learned scalar and $\mathbf{W}^{o} \in \mathbb{R}^{3 d \times d^{\prime}}$ is a parameter matrix. $\mathbf{Q}+\alpha \cdot \mathbf{A}$ is a [residual connection](https://paperswithcode.com/method/residual-connection) which improves gradient propagation and stabilizes training. We initialize $\alpha$ to zero and as a result,
$$
\mathbf{U}^{\top}=\mathbf{W}^{o} \mathbf{Q}=\left(\mathbf{W}^{o} \mathbf{W}^{q}\right) \mathbf{X}^{\top}
$$
initially falls back to a linear transformation of the input $\mathbf{X}$, skipping the attention transformation. Intuitively, skipping attention encourages leveraging recurrence to capture sequential patterns during the early stage of training. As $|\alpha|$ grows, the attention mechanism can learn long-range dependencies. In addition, $\mathbf{W}^{o} \mathbf{W}^{q}$ can be interpreted as applying a matrix factorization trick with a small inner dimension $d^{\prime}<d$, reducing the total number of parameters. The Figure compares the differences between SRU, SRU with this factorization trick (but without attention), and SRU++.
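The attention component above can be sketched in NumPy as follows (single head, no causal masking; the function names and the `softmax` helper are ours, and shapes follow the equations above):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax along the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sru_pp_attention(X, Wq, Wk, Wv, Wo, alpha):
    """Attention component of SRU++.

    X: (L, d); Wq: (d', d); Wk, Wv: (d', d'); Wo: (3d, d').
    Returns U^T of shape (3d, L), the input to the elementwise recurrence.
    """
    Q = Wq @ X.T                                  # (d', L)
    K = Wk @ Q                                    # keys from Q, not X
    V = Wv @ Q                                    # values from Q, not X
    dp = Q.shape[0]
    A_T = softmax((Q.T @ K) / np.sqrt(dp)) @ V.T  # (L, d')
    # Residual: with alpha = 0 this is Wo @ Q = (Wo Wq) X^T, i.e. the
    # attention transformation is skipped entirely at initialisation.
    return Wo @ (Q + alpha * A_T.T)
```

With `alpha = 0` the output reduces exactly to the factorized linear map $(\mathbf{W}^{o}\mathbf{W}^{q})\mathbf{X}^{\top}$ described above.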
The last modification is adding [layer normalization](https://paperswithcode.com/method/layer-normalization) to each SRU++ layer. We apply normalization after the attention operation and before the matrix multiplication with $\mathbf{W}^{o}$:
$$
\mathbf{U}^{\top}=\mathbf{W}^{o} \operatorname{layernorm}(\mathbf{Q}+\alpha \cdot \mathbf{A})
$$
This implementation is post-layer normalization in which the normalization is added after the residual connection. |
Given the following machine learning model name: Global Convolutional Network, provide a description of the model | A **Global Convolutional Network**, or **GCN**, is a semantic segmentation building block that utilizes a large kernel to help perform classification and localization tasks simultaneously. It can be used in a [FCN](https://paperswithcode.com/method/fcn)-like structure, where the [GCN](https://paperswithcode.com/method/gcn) is used to generate semantic score maps. Instead of directly using larger kernels or global [convolution](https://paperswithcode.com/method/convolution), the GCN module employs a combination of $1 \times k + k \times 1$ and $k \times 1 + 1 \times k$ convolutions, which enables [dense connections](https://paperswithcode.com/method/dense-connections) within a large
$k\times{k}$ region in the feature map. |
Given the following machine learning model name: Recurrent Entity Network, provide a description of the model | The **Recurrent Entity Network** is equipped with a dynamic long-term memory which allows it to maintain and update a representation of the state of the world as it receives new data. For language understanding tasks, it can reason on-the-fly as it reads text, not just when it is required to answer a question or respond as is the case for a [Memory Network](https://paperswithcode.com/method/memory-network). Like a [Neural Turing Machine](https://paperswithcode.com/method/neural-turing-machine) or Differentiable Neural Computer, it maintains a fixed size memory and can learn to perform location and content-based read and write operations. However, unlike those models it has a simple parallel architecture in which several memory locations can be updated simultaneously.
The model consists of a fixed number of dynamic memory cells, each containing a vector key $w_j$ and a vector value (or content) $h_j$. Each cell is associated with its own processor, a simple gated recurrent network that may update the cell value given an input. If each cell learns to represent a concept or entity in the world, one can imagine a gating mechanism that, based on the key and content of the memory cells, will only modify the cells that concern the entities mentioned in the input. There is no direct interaction between the memory cells, hence the system can be seen as multiple identical processors functioning in parallel, with distributed local memory.
The sharing of these parameters reflects an invariance of these laws across object instances, similarly to how the [weight tying](https://paperswithcode.com/method/weight-tying) scheme in a CNN reflects an invariance of image statistics across locations. Their hidden state is updated only when new information relevant to their concept is received, and remains otherwise unchanged. The keys used in the addressing/gating mechanism also correspond to concepts or entities, but are modified only during learning, not during inference. |
Given the following machine learning model name: Fixed Factorized Attention, provide a description of the model | **Fixed Factorized Attention** is a factorized attention pattern where specific cells summarize previous locations and propagate that information to all future cells. It was proposed as part of the [Sparse Transformer](https://paperswithcode.com/method/sparse-transformer) architecture.
A self-attention layer maps a matrix of input embeddings $X$ to an output matrix and is parameterized by a connectivity pattern $S = \text{set}\left(S\_{1}, \dots, S\_{n}\right)$, where $S\_{i}$ denotes the set of indices of the input vectors to which the $i$th output vector attends. The output vector is a weighted sum of transformations of the input vectors:
$$ \text{Attend}\left(X, S\right) = \left(a\left(\mathbf{x}\_{i}, S\_{i}\right)\right)\_{i\in\text{set}\left(1,\dots,n\right)}$$
$$ a\left(\mathbf{x}\_{i}, S\_{i}\right) = \text{softmax}\left(\frac{\left(W\_{q}\mathbf{x}\_{i}\right)K^{T}\_{S\_{i}}}{\sqrt{d}}\right)V\_{S\_{i}} $$
$$ K\_{Si} = \left(W\_{k}\mathbf{x}\_{j}\right)\_{j\in{S\_{i}}} $$
$$ V\_{Si} = \left(W\_{v}\mathbf{x}\_{j}\right)\_{j\in{S\_{i}}} $$
Here $W\_{q}$, $W\_{k}$, and $W\_{v}$ represent the weight matrices which transform a given $x\_{i}$ into a query, key, or value, and $d$ is the inner dimension of the queries and keys. The output at each position is a sum of the values weighted by the scaled dot-product similarity of the keys and queries.
Full self-attention for autoregressive models defines $S\_{i} = \text{set}\left(j : j \leq i\right)$, allowing every element to attend to all previous positions and its own position.
Factorized self-attention instead has $p$ separate attention heads, where the $m$th head defines a subset of the indices $A\_{i}^{(m)} ⊂ \text{set}\left(j : j \leq i\right)$ and lets $S\_{i} = A\_{i}^{(m)}$. The goal with the Sparse [Transformer](https://paperswithcode.com/method/transformer) was to find efficient choices for the subset $A$.
Formally for Fixed Factorized Attention, $A^{(1)}\_{i} = ${$j : \left(\lfloor{j/l\rfloor}=\lfloor{i/l\rfloor}\right)$}, where the brackets denote the floor operation, and $A^{(2)}\_{i} = ${$j : j \mod l \in ${$t, t+1, \ldots, l$}}, where $t=l-c$ and $c$ is a hyperparameter. The $i$-th output vector of the attention head attends to all input vectors either from $A^{(1)}\_{i}$ or $A^{(2)}\_{i}$. This pattern can be visualized in the figure to the right.
If the stride is 128 and $c = 8$, then all future positions greater than 128 can attend to positions 120-128, all positions greater than 256 can attend to 248-256, and so forth.
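The two index sets can be computed directly from the definitions (an illustrative NumPy snippet using zero-based positions, so "positions 120-128" above corresponds to indices 120-127 here):

```python
import numpy as np

def fixed_attention_sets(i, l, c):
    """Return (A1, A2): the two index sets attended by position i.

    A1: positions j <= i in the same length-l block as i.
    A2: positions j <= i with j mod l in {l-c, ..., l-1}, i.e. the last
        c positions of each block (t = l - c in the notation above).
    """
    js = np.arange(i + 1)
    A1 = set(js[js // l == i // l].tolist())
    A2 = set(js[js % l >= l - c].tolist())
    return A1, A2
```

For example, with stride $l = 128$ and $c = 8$, position 200 attends to its own block $[128, 200]$ via $A^{(1)}$ and to the block summary indices 120-127 via $A^{(2)}$.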
A fixed-attention pattern with $c = 1$ limits the expressivity of the network significantly, as many representations in the network are only used for one block whereas a small number of locations are used by all blocks. The authors found choosing $c \in ${$8, 16, 32$} for typical values of $l \in ${$128, 256$} performs well, although this increases the computational cost of this method by $c$ in comparison to the [strided attention](https://paperswithcode.com/method/strided-attention).
Additionally, the authors found that when using multiple heads, having them attend to distinct subblocks of length $c$ within the block of size $l$ was preferable to having them attend to the same subblock. |
Given the following machine learning model name: End-to-End Neural Diarization, provide a description of the model | **End-to-End Neural Diarization** is a neural network for speaker diarization in which a neural network directly outputs speaker diarization results given a multi-speaker recording. To realize such an end-to-end model, the speaker diarization problem is formulated as a multi-label classification problem and a permutation-free objective function is introduced to directly minimize diarization errors. The EEND method can explicitly handle speaker overlaps during training and inference. Just by feeding multi-speaker recordings with corresponding speaker segment labels, the model can be adapted to real conversations. |
Given the following machine learning model name: ESPNet, provide a description of the model | **ESPNet** is a convolutional neural network for semantic segmentation of high resolution images under resource constraints. ESPNet is based on a convolutional module, efficient spatial pyramid ([ESP](https://paperswithcode.com/method/esp)), which is efficient in terms of computation, memory, and power. |
Given the following machine learning model name: Network Embedding as Matrix Factorization:, provide a description of the model | |
Given the following machine learning model name: SpatialDropout, provide a description of the model | **SpatialDropout** is a type of [dropout](https://paperswithcode.com/method/dropout) for convolutional networks. For a given [convolution](https://paperswithcode.com/method/convolution) feature tensor of size $n\_{\text{feats}} \times \text{height} \times \text{width}$, we perform only $n\_{\text{feats}}$ dropout trials and extend the dropout value across the entire feature map. Therefore, adjacent pixels in the dropped-out feature map are either all 0 (dropped-out) or all active, as illustrated in the figure to the right. |
Given the following machine learning model name: Gradual Self-Training, provide a description of the model | Gradual self-training is a method for semi-supervised domain adaptation. The goal is to adapt an initial classifier trained on a source domain given only unlabeled data that shifts gradually in distribution towards a target domain.
This comes up for example in applications ranging from sensor networks and self-driving car perception modules to brain-machine interfaces, where machine learning systems must adapt to data distributions that evolve over time.
The gradual self-training algorithm begins with a classifier $w_0$ trained on labeled examples from the source domain (Figure a). For each successive domain $P_t$, the algorithm generates pseudolabels for unlabeled examples from that domain, and then trains a regularized supervised classifier on the pseudolabeled examples. The intuition, visualized in the Figure, is that after a single gradual shift, most examples are pseudolabeled correctly so self-training learns a good classifier on the shifted data, but the shift from the source to the target can be too large for self-training to correct. |
Given the following machine learning model name: BigBird, provide a description of the model | **BigBird** is a [Transformer](https://paperswithcode.com/method/transformer) with a sparse attention mechanism that reduces the quadratic dependency of self-attention to linear in the number of tokens. BigBird is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. In particular, BigBird consists of three main parts:
- A set of $g$ global tokens attending on all parts of the sequence.
- All tokens attending to a set of $w$ local neighboring tokens.
- All tokens attending to a set of $r$ random tokens.
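The three parts above can be combined into a toy attention mask (an illustrative NumPy sketch; the real implementation uses blocked sparse computation, and the parameter names here are ours):

```python
import numpy as np

def bigbird_mask(n, g=2, w=3, r=2, seed=0):
    """Boolean n x n attention mask: True where query i may attend key j."""
    rng = np.random.default_rng(seed)
    M = np.zeros((n, n), dtype=bool)
    M[:g, :] = True          # g global tokens attend everywhere ...
    M[:, :g] = True          # ... and are attended by every token.
    for i in range(n):
        lo, hi = max(0, i - w // 2), min(n, i + w // 2 + 1)
        M[i, lo:hi] = True   # window of w local neighbouring tokens
        M[i, rng.choice(n, size=r, replace=False)] = True  # r random keys
    return M
```

Each row has $O(g + w + r)$ active entries, so the cost grows linearly in the number of tokens rather than quadratically.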
This leads to a high performing attention mechanism scaling to much longer sequence lengths (8x). |
Given the following machine learning model name: ALDEN, provide a description of the model | **ALDEN**, or **Active Learning with DivErse iNterpretations**, is an active learning approach for text classification. With local interpretations in DNNs, ALDEN identifies linearly separable regions of samples. Then, it selects samples according to their diversity of local interpretations and queries their labels.
Specifically, we first calculate the local interpretation in the DNN for each sample as the gradient backpropagated from the final predictions to the input features. Then, we use the most diverse interpretations of words in a sample to measure its diverseness. Accordingly, we select unlabeled samples with the maximally diverse interpretations for labeling and retrain the model with these labeled samples. |
Given the following machine learning model name: Continuously Indexed Domain Adaptation, provide a description of the model | **Continuously Indexed Domain Adaptation** combines traditional adversarial adaptation with a novel discriminator that models the encoding-conditioned domain index distribution.
Image Source: [Wang et al.](https://arxiv.org/pdf/2007.01807v2.pdf) |
Given the following machine learning model name: RAdam, provide a description of the model | **Rectified Adam**, or **RAdam**, is a variant of the [Adam](https://paperswithcode.com/method/adam) stochastic optimizer that introduces a term to rectify the variance of the adaptive learning rate. It seeks to tackle the bad convergence problem suffered by Adam. The authors argue that the root cause of this behaviour is that the adaptive learning rate has undesirably large variance in the early stage of model training, due to the limited amount of training samples being used. Thus, to reduce such variance, it is better to use smaller learning rates in the first few epochs of training - which justifies the warmup heuristic. This heuristic motivates RAdam which rectifies the variance problem:
$$g\_{t} = \nabla\_{\theta}f\_{t}\left(\theta\_{t-1}\right) $$
$$v\_{t} = \beta\_{2}v\_{t-1} + \left(1-\beta\_{2}\right)g^{2}\_{t} $$
$$m\_{t} = \beta\_{1}m\_{t-1} + \left(1-\beta\_{1}\right)g\_{t} $$
$$ \hat{m\_{t}} = m\_{t} / \left(1-\beta^{t}\_{1}\right) $$
$$ \rho\_{t} = \rho\_{\infty} - 2t\beta^{t}\_{2}/\left(1-\beta^{t}\_{2}\right) $$
$$\rho_{\infty} = \frac{2}{1-\beta_2} - 1$$
If the variance is tractable, i.e. $\rho\_{t} > 4$, then:
...the adaptive learning rate is computed as:
$$ l\_{t} = \sqrt{\left(1-\beta^{t}\_{2}\right)/v\_{t}}$$
...the variance rectification term is calculated as:
$$ r\_{t} = \sqrt{\frac{(\rho\_{t}-4)(\rho\_{t}-2)\rho\_{\infty}}{(\rho\_{\infty}-4)(\rho\_{\infty}-2)\rho\_{t}}}$$
...and we update parameters with adaptive momentum:
$$ \theta\_{t} = \theta\_{t-1} - \alpha\_{t}r\_{t}\hat{m}\_{t}l\_{t} $$
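A single RAdam step, covering both the rectified update above and the non-rectified fallback used when $\rho\_{t} \leq 4$, can be sketched in NumPy as follows (the small `eps` inside the square root is our addition for numerical safety):

```python
import numpy as np

def radam_step(theta, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One RAdam update; returns (new_theta, m, v). t is 1-indexed."""
    m = beta1 * m + (1 - beta1) * g            # momentum
    v = beta2 * v + (1 - beta2) * g ** 2       # second moment
    m_hat = m / (1 - beta1 ** t)               # bias-corrected momentum
    rho_inf = 2 / (1 - beta2) - 1
    rho_t = rho_inf - 2 * t * beta2 ** t / (1 - beta2 ** t)
    if rho_t > 4:  # variance is tractable: rectified adaptive update
        l_t = np.sqrt((1 - beta2 ** t) / (v + eps))
        r_t = np.sqrt(((rho_t - 4) * (rho_t - 2) * rho_inf)
                      / ((rho_inf - 4) * (rho_inf - 2) * rho_t))
        theta = theta - lr * r_t * m_hat * l_t
    else:          # early steps: plain momentum update, no adaptive rate
        theta = theta - lr * m_hat
    return theta, m, v
```

With the default $\beta\_{2} = 0.999$, the first few steps have $\rho\_{t} \leq 4$ and take the non-rectified branch, which is how RAdam reproduces the effect of a warmup.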
If the variance isn't tractable we update instead with:
$$ \theta\_{t} = \theta\_{t-1} - \alpha\_{t}\hat{m}\_{t} $$ |
Given the following machine learning model name: Intrinsically Motivated Goal Exploration Processes, provide a description of the model | **Intrinsically Motivated Goal Exploration Processes** are population-based, intrinsically motivated goal exploration algorithms applied to real-world robot learning of complex skills, such as tool use. |
Given the following machine learning model name: FixRes, provide a description of the model | **FixRes** is an image scaling strategy that seeks to optimize classifier performance. It is motivated by the observation that data augmentations induce a significant discrepancy between the size of the objects seen by the classifier at train and test time: in fact, a lower train resolution improves the classification at test time! FixRes is a simple strategy that employs different train and test resolutions, with two calibrations: (a) calibrating the object sizes by adjusting the crop size, and (b) adjusting statistics before spatial pooling. |
Given the following machine learning model name: Global-and-Local attention, provide a description of the model | Most attention mechanisms learn where to focus using only weak supervisory signals from class labels, which inspired Linsley et al. to investigate how explicit human supervision can affect the performance and interpretability of attention models. As a proof of concept, Linsley et al. proposed the global-and-local attention (GALA) module, which extends an SE block with a spatial attention mechanism.
Given the input feature map $X$, GALA uses an attention mask that combines global and local attention to tell the network where and on what to focus. As in SE blocks, global attention aggregates global information by global average pooling and then produces a channel-wise attention weight vector using a multilayer perceptron. In local attention, two consecutive $1\times 1$ convolutions are conducted on the input to produce a positional weight map. The outputs of the local and global pathways are combined by addition and multiplication. Formally, GALA can be represented as:
\begin{align}
s_g &= W_{2} \delta (W_{1}\text{GAP}(X))
\end{align}
\begin{align}
s_l &= Conv_2^{1\times 1} (\delta(Conv_1^{1\times1}(X)))
\end{align}
\begin{align}
s_g^* &= \text{Expand}(s_g)
\end{align}
\begin{align}
s_l^* &= \text{Expand}(s_l)
\end{align}
\begin{align}
s &= \tanh\left(a \cdot (s_g^\* + s_l^\*) + m \cdot (s_g^\* s_l^\*)\right)
\end{align}
\begin{align}
Y &= sX
\end{align}
where $a,m \in \mathbb{R}^{C}$ are learnable parameters representing channel-wise weight vectors.
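A single-sample NumPy sketch of the GALA computation above, taking $\delta$ to be ReLU and implementing the $1\times 1$ convolutions as per-pixel linear maps (all names are ours):

```python
import numpy as np

def gala(X, W1, W2, K1, K2, a, m):
    """GALA on a single feature map X of shape (C, H, W).

    Global path: s_g = W2 @ relu(W1 @ GAP(X))          -> per-channel weights
    Local path:  two 1x1 convs (per-pixel linear maps) -> per-position weights
    Combined:    s = tanh(a*(sg + sl) + m*(sg * sl)),  Y = s * X.
    """
    C, H, W = X.shape
    relu = lambda z: np.maximum(z, 0.0)
    # Global attention: GAP then a two-layer MLP gives channel weights.
    s_g = W2 @ relu(W1 @ X.mean(axis=(1, 2)))            # (C,)
    # Local attention: 1x1 convolutions act independently at each pixel.
    h = relu(np.einsum('oc,chw->ohw', K1, X))            # (C_mid, H, W)
    s_l = np.einsum('oc,chw->ohw', K2, h)                # (1, H, W)
    # Expand both attention maps to (C, H, W), then combine.
    sg = np.broadcast_to(s_g[:, None, None], (C, H, W))
    sl = np.broadcast_to(s_l, (C, H, W))
    s = np.tanh(a[:, None, None] * (sg + sl) + m[:, None, None] * (sg * sl))
    return s * X
```

Because the mask passes through $\tanh$, every output activation is bounded in magnitude by the corresponding input activation.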
Supervised by human-provided feature importance maps, GALA has significantly improved representational power and can be combined with any CNN backbone. |
Given the following machine learning model name: Factorization machines with cubic splines for numerical features, provide a description of the model | Using cubic splines to improve factorization machine accuracy with numerical features |
Given the following machine learning model name: Deformable DETR, provide a description of the model | **Deformable DETR** is an object detection method that aims to mitigate the slow convergence and high complexity issues of [DETR](https://www.paperswithcode.com/method/detr). It combines the best of the sparse spatial sampling of [deformable convolution](https://paperswithcode.com/method/deformable-convolution) and the relation modeling capability of [Transformers](https://paperswithcode.com/methods/category/transformers). Specifically, it introduces a
deformable attention module, which attends to a small set of sampling locations as a pre-filter for prominent key elements out of all the feature map pixels. The module can be naturally extended to aggregating multi-scale features, without the help of [FPN](https://paperswithcode.com/method/fpn). |
Given the following machine learning model name: VL-T5, provide a description of the model | VL-T5 is a unified framework that learns different tasks in a single architecture with the same language modeling objective, i.e., multimodal conditional text generation. The model learns to generate labels in text based on the visual and textual inputs. In contrast to other existing methods, the framework unifies tasks as generating text labels conditioned on multimodal inputs. This allows the model to tackle vision-and-language tasks with unified text generation objective. The models use text prefixes to adapt to different tasks. |
Given the following machine learning model name: ScanSSD, provide a description of the model | **ScanSSD** is a single-shot Detector ([SSD](https://paperswithcode.com/method/ssd)) for locating math formulas offset from text and embedded in textlines. It uses only visual features for detection: no formatting or typesetting information such as layout, font, or character labels are employed. Given a 600 dpi document page image, a Single Shot Detector (SSD) locates formulas at multiple scales using sliding windows, after which candidate detections are pooled to obtain page-level results. |
Given the following machine learning model name: Deep Q-Network, provide a description of the model | A **DQN**, or Deep Q-Network, approximates the action-value function in a [Q-Learning](https://paperswithcode.com/method/q-learning) framework with a neural network. In the Atari games case, the network takes in several frames of the game as input and outputs a value for each possible action.
It is usually used in conjunction with [Experience Replay](https://paperswithcode.com/method/experience-replay), for storing the episode steps in memory for off-policy learning, where samples are drawn from the replay memory at random. Additionally, the Q-Network is usually optimized towards a frozen target network that is periodically updated with the latest weights every $k$ steps (where $k$ is a hyperparameter). The latter makes training more stable by preventing short-term oscillations from a moving target. The former tackles autocorrelation that would occur from on-line learning, and having a replay memory makes the problem more like a supervised learning problem.
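The frozen-target part of the update can be sketched as follows (an illustrative NumPy snippet computing TD targets for a replay batch; all names are ours):

```python
import numpy as np

def dqn_td_targets(batch, q_target, gamma=0.99):
    """TD targets for a replay batch using a frozen target network.

    batch: (states, actions, rewards, next_states, dones) as arrays.
    q_target: function mapping states (B, ...) -> Q-values (B, n_actions),
              i.e. the periodically-updated frozen network.
    """
    s, a, r, s2, done = batch
    # Bootstrap from the frozen target network; zero out terminal states.
    max_next_q = q_target(s2).max(axis=1)
    return r + gamma * (1.0 - done) * max_next_q
```

The online network is then regressed toward these targets on the sampled minibatch, while `q_target` stays fixed between the periodic weight copies.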
Image Source: [here](https://www.researchgate.net/publication/319643003_Autonomous_Quadrotor_Landing_using_Deep_Reinforcement_Learning) |
Given the following machine learning model name: Supporting Clustering with Contrastive Learning, provide a description of the model | **SCCL**, or **Supporting Clustering with Contrastive Learning**, is a framework to leverage contrastive learning to promote better separation in unsupervised clustering. It combines the top-down clustering with the bottom-up instance-wise contrastive learning to achieve better inter-cluster distance and intra-cluster distance. During training, we jointly optimize a clustering loss over the original data instances and an instance-wise contrastive loss over the associated augmented pairs. |
Given the following machine learning model name: ClusterFit, provide a description of the model | **ClusterFit** is a self-supervision approach for learning image representations. Given a dataset, we (a) cluster its features extracted from a pre-trained network using k-means and (b) re-train a new network from scratch on this dataset using cluster assignments as pseudo-labels. |
Given the following machine learning model name: Residual GRU, provide a description of the model | A **Residual GRU** is a [gated recurrent unit (GRU)](https://paperswithcode.com/method/gru) that incorporates the idea of residual connections from [ResNets](https://paperswithcode.com/method/resnet). |
Given the following machine learning model name: Normalized Temperature-scaled Cross Entropy Loss, provide a description of the model | **NT-Xent**, or **Normalized Temperature-scaled Cross Entropy Loss**, is a loss function. Let $\text{sim}\left(\mathbf{u}, \mathbf{v}\right) = \mathbf{u}^{T}\mathbf{v}/||\mathbf{u}|| ||\mathbf{v}||$ denote the cosine similarity between two vectors $\mathbf{u}$ and $\mathbf{v}$. Then the loss function for a positive pair of examples $\left(i, j\right)$ is:
$$ \mathbb{l}\_{i,j} = -\log\frac{\exp\left(\text{sim}\left(\mathbf{z}\_{i}, \mathbf{z}\_{j}\right)/\tau\right)}{\sum^{2N}\_{k=1}\mathcal{1}\_{[k\neq{i}]}\exp\left(\text{sim}\left(\mathbf{z}\_{i}, \mathbf{z}\_{k}\right)/\tau\right)}$$
where $\mathcal{1}\_{[k\neq{i}]} \in ${$0, 1$} is an indicator function evaluating to $1$ iff $k\neq{i}$ and $\tau$ denotes a temperature parameter. The final loss is computed across all positive pairs, both $\left(i, j\right)$ and $\left(j, i\right)$, in a mini-batch.
Source: [SimCLR](https://paperswithcode.com/method/simclr) |
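The loss above can be computed directly from a batch of projected vectors. A minimal NumPy sketch (the function name is hypothetical; it assumes `z` stacks all $2N$ projections row-wise):

```python
import numpy as np

def nt_xent_pair_loss(z, i, j, tau=0.5):
    """NT-Xent loss for the positive pair (i, j) given 2N projected vectors z.

    Follows the formula above: cosine similarity scaled by temperature tau,
    with a log-softmax over all other samples in the batch.
    """
    # unit-normalize rows so the dot product equals cosine similarity
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    # exclude the anchor itself from the denominator (the 1[k != i] indicator)
    mask = np.ones(len(z), dtype=bool)
    mask[i] = False
    return -sim[i, j] + np.log(np.exp(sim[i][mask]).sum())
```

The full batch loss would average this over every positive pair in both orders, $(i, j)$ and $(j, i)$.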
Given the following machine learning model name: Neural Attention Fields, provide a description of the model | **NEAT**, or **Neural Attention Fields**, is a feature representation for end-to-end imitation learning models. NEAT is a continuous function which maps locations in Bird's Eye View (BEV) scene coordinates to waypoints and semantics, using intermediate attention maps to iteratively compress high-dimensional 2D image features into a compact representation. This allows the model to selectively attend to relevant regions in the input while ignoring information irrelevant to the driving task, effectively associating the images with the BEV representation. Furthermore, visualizing the attention maps for models with NEAT intermediate representations provides improved interpretability. |
Given the following machine learning model name: Modulated Residual Network, provide a description of the model | **MODERN**, or **Modulated Residual Network**, is an architecture for [visual question answering](https://paperswithcode.com/task/visual-question-answering) (VQA). It employs [conditional batch normalization](https://paperswithcode.com/method/conditional-batch-normalization) to allow a linguistic embedding from an [LSTM](https://paperswithcode.com/method/lstm) to modulate the [batch normalization](https://paperswithcode.com/method/batch-normalization) parameters of a [ResNet](https://paperswithcode.com/method/resnet). This enables the linguistic embedding to manipulate entire feature maps by scaling them up or down, negating them, or shutting them off, etc. |
Given the following machine learning model name: MelGAN, provide a description of the model | **MelGAN** is a non-autoregressive feed-forward convolutional architecture to perform audio waveform generation in a [GAN](https://paperswithcode.com/method/gan) setup. The architecture is a fully convolutional feed-forward network with mel-spectrogram $s$ as input and raw waveform $x$ as output. Since the mel-spectrogram is at
a 256× lower temporal resolution, the authors use a stack of transposed convolutional layers to upsample the input sequence. Each transposed convolutional layer is followed by a stack of residual blocks with dilated convolutions. Unlike traditional GANs, the MelGAN generator does not use a global noise vector as input.
To deal with 'checkerboard artifacts' in audio, instead of using [PhaseShuffle](https://paperswithcode.com/method/phase-shuffle), MelGAN uses a kernel size that is a multiple of the stride.
[Weight normalization](https://paperswithcode.com/method/weight-normalization) is used for normalization. A [window-based discriminator](https://paperswithcode.com/method/window-based-discriminator), similar to a [PatchGAN](https://paperswithcode.com/method/patchgan) is used for the discriminator. |
Given the following machine learning model name: Self training multi target domain adaptive RetinaNet, provide a description of the model | |
Given the following machine learning model name: Mixture of Logistic Distributions, provide a description of the model | **Mixture of Logistic Distributions (MoL)** is a type of output function, and an alternative to a [softmax](https://paperswithcode.com/method/softmax) layer. Discretized logistic mixture likelihood is used in [PixelCNN](https://paperswithcode.com/method/pixelcnn)++ and [WaveNet](https://paperswithcode.com/method/wavenet) to predict discrete values.
Image Credit: [Hao Gao](https://medium.com/@smallfishbigsea/an-explanation-of-discretized-logistic-mixture-likelihood-bdfe531751f0) |
Given the following machine learning model name: Procrustes, provide a description of the model | Procrustes |
Given the following machine learning model name: PSPNet, provide a description of the model | **PSPNet**, or **Pyramid Scene Parsing Network**, is a semantic segmentation model that utilises a pyramid parsing module that exploits global context information by different-region-based context aggregation. The local and global clues together make the final prediction more reliable.
Given an input image, PSPNet uses a pretrained CNN with the dilated network strategy to extract the feature map. The final feature map size is $1/8$ of the input image. On top of the map, we use the [pyramid pooling module](https://paperswithcode.com/method/pyramid-pooling-module) to gather context information. Using our 4-level pyramid, the pooling kernels cover the whole, half of, and small portions of the image. They are fused as the global prior.
Then we concatenate the prior with the original feature map in the final part of the network, which is followed by a [convolution](https://paperswithcode.com/method/convolution) layer to generate the final prediction map. |
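The multi-scale context gathering described above can be sketched in NumPy. This is illustrative only: the real pyramid pooling module also applies a $1\times 1$ convolution and upsampling to each pooled map before concatenation, which is omitted here:

```python
import numpy as np

def pyramid_pooling_prior(feature_map, bin_sizes=(1, 2, 3, 6)):
    """Average-pool a C x H x W feature map into n x n bins at several
    pyramid levels: a 1x1 grid covers the whole image, larger grids cover
    progressively smaller sub-regions. Returns one pooled grid per level."""
    c, h, w = feature_map.shape
    priors = []
    for n in bin_sizes:
        pooled = np.empty((c, n, n))
        for by in range(n):
            for bx in range(n):
                ys = slice(by * h // n, (by + 1) * h // n)
                xs = slice(bx * w // n, (bx + 1) * w // n)
                # mean over the spatial sub-region, kept per channel
                pooled[:, by, bx] = feature_map[:, ys, xs].mean(axis=(1, 2))
        priors.append(pooled)
    return priors
```

The four pooled grids together form the "global prior" that is fused with the original feature map.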
Given the following machine learning model name: Local Augmentation, provide a description of the model | **Local Augmentation for Graph Neural Networks**, or **LA-GNN**, is a data augmentation technique that enhances a node's features by exploiting its local subgraph structure. Specifically, it learns the conditional distribution of the connected neighbors' representations given the representation of the central node, which has an analogy with the [Skip-gram of word2vec](https://paperswithcode.com/method/skip-gram-word2vec) model that predicts the probability of the context given the central word. After augmenting the neighborhood, we concatenate the initial and the generated feature matrices as input for GNNs. |
Given the following machine learning model name: DeepMask, provide a description of the model | **DeepMask** is an object proposal algorithm based on a convolutional neural network. Given an input image patch, DeepMask generates a class-agnostic mask and an associated score which estimates the likelihood of the patch fully containing a centered object (without any notion of an object category). The core of the model is a ConvNet which jointly predicts the mask and the object score. A large part of the network is shared between those two tasks: only the last few network
layers are specialized for separately outputting a mask and score prediction. |
Given the following machine learning model name: Soft Pooling, provide a description of the model | **SoftPool** is a fast and efficient pooling method that sums exponentially weighted activations. Compared to a range of other pooling methods, SoftPool retains more information in the downsampled activation maps, and this more refined downsampling leads to better classification accuracy. |
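Assuming SoftPool's exponentially weighted sum is the softmax-weighted sum over each pooling region, a minimal 1D sketch (the function name is hypothetical):

```python
import numpy as np

def softpool_1d(x, kernel=2, stride=2):
    """SoftPool over windows of a 1D activation vector: each output is the
    softmax(a)-weighted sum of the activations a in the window, i.e.
    sum_i exp(a_i) * a_i / sum_j exp(a_j)."""
    out = []
    for start in range(0, len(x) - kernel + 1, stride):
        a = np.asarray(x[start:start + kernel], dtype=float)
        w = np.exp(a - a.max())  # subtract max for numerical stability
        out.append(float((w * a).sum() / w.sum()))
    return np.array(out)
```

Because the weights come from a softmax, each output lies between the average and the maximum of its window, which is why SoftPool preserves more of the activation information than hard max pooling.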
Given the following machine learning model name: Attentive Walk-Aggregating Graph Neural Network, provide a description of the model | **AWARE** (Attentive Walk-Aggregating GRaph Neural NEtwork) is a simple, interpretable, and end-to-end supervised GNN model for graph-level prediction that incorporates weighting schemes into walk-aggregating GNNs. AWARE aggregates walk information by means of weighting schemes at distinct levels (vertex-, walk-, and graph-level) in a principled manner. By virtue of the weighting schemes incorporated at these different levels, AWARE can emphasize the information important for prediction while diminishing the irrelevant information, leading to representations that can improve learning performance. |
Given the following machine learning model name: Factor Graph Attention, provide a description of the model | **Factor Graph Attention** is a general multimodal attention unit for any number of modalities. Inspired by graphical models, it infers several attention beliefs via aggregated interaction messages. |
Given the following machine learning model name: Population Based Augmentation, provide a description of the model | **Population Based Augmentation**, or **PBA**, is a data augmentation strategy that generates nonstationary augmentation policy schedules instead of a fixed augmentation policy. PBA treats the augmentation policy search problem as a special case of hyperparameter schedule learning. It leverages [Population Based Training](https://paperswithcode.com/method/population-based-training) (PBT), a hyperparameter search algorithm which
optimizes the parameters of a network jointly with their hyperparameters to maximize performance. The output of PBT is not an optimal hyperparameter configuration but rather a trained model and schedule of hyperparameters.
In PBA, we are only interested in the learned schedule and discard the child model result (similar to [AutoAugment](https://paperswithcode.com/method/autoaugment)). This learned augmentation schedule can then be used to improve the training of different (i.e., larger and costlier to train) models on the same dataset.
PBT executes as follows. To start, a fixed population of models is randomly initialized and trained in parallel. At certain intervals, an “exploit-and-explore” procedure is applied to the worst-performing population members, where the model clones the weights of a better-performing model (i.e., exploitation) and then perturbs the hyperparameters of the cloned model to search in the hyperparameter space (i.e., exploration). Because the weights of the models are cloned and never reinitialized, the total computation required is the computation to train a single model times the population size. |
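The exploit-and-explore step can be sketched as follows. This is a minimal illustration only; the dictionary layout, fraction of replaced members, and perturbation factors are assumptions, not PBT's exact procedure:

```python
import random

def pbt_exploit_explore(population, perturb=0.2, frac=0.25, rng=random.Random(0)):
    """One exploit-and-explore step over a population of members, each a dict
    with 'score', 'weights', and 'hyperparams'. The bottom `frac` of members
    clone a top performer's weights (exploit), then randomly perturb the
    cloned hyperparameters (explore)."""
    ranked = sorted(population, key=lambda m: m["score"], reverse=True)
    n = max(1, int(len(ranked) * frac))
    top, bottom = ranked[:n], ranked[-n:]
    for member in bottom:
        source = rng.choice(top)
        member["weights"] = dict(source["weights"])   # exploit: clone weights
        member["hyperparams"] = {                     # explore: perturb hyperparams
            k: v * rng.choice([1 - perturb, 1 + perturb])
            for k, v in source["hyperparams"].items()
        }
    return population
```

Because weights are cloned rather than reinitialized, each member continues training from a promising point in both weight and hyperparameter space.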
Given the following machine learning model name: ResNet-RS, provide a description of the model | **ResNet-RS** is a family of [ResNet](https://paperswithcode.com/method/resnet) architectures that are 1.7x faster than [EfficientNets](https://paperswithcode.com/method/efficientnet) on TPUs, while achieving similar accuracies on ImageNet. The authors propose two new scaling strategies: (1) scale model depth in regimes where overfitting can occur (width scaling is preferable otherwise); (2) increase image resolution more slowly than previously recommended.
Additional improvements include the use of a [cosine learning rate schedule](https://paperswithcode.com/method/cosine-annealing), [label smoothing](https://paperswithcode.com/method/label-smoothing), [stochastic depth](https://paperswithcode.com/method/stochastic-depth), [RandAugment](https://paperswithcode.com/method/randaugment), decreased [weight decay](https://paperswithcode.com/method/weight-decay), [squeeze-and-excitation](https://paperswithcode.com/method/squeeze-and-excitation-block) and the use of the [ResNet-D](https://paperswithcode.com/method/resnet-d) architecture. |
Given the following machine learning model name: AlexNet, provide a description of the model | **AlexNet** is a classic convolutional neural network architecture. It consists of convolutions, [max pooling](https://paperswithcode.com/method/max-pooling) and dense layers as the basic building blocks. Grouped convolutions are used in order to fit the model across two GPUs. |
Given the following machine learning model name: Inverse Square Root Schedule, provide a description of the model | **Inverse Square Root** is a learning rate schedule $1/\sqrt{\max\left(n, k\right)}$, where
$n$ is the current training iteration and $k$ is the number of warm-up steps. This sets a constant learning rate for the first $k$ steps, then decays the learning rate proportionally to $1/\sqrt{n}$ until pre-training is over. |
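The schedule can be written directly from the formula (the function name and base learning rate are illustrative assumptions):

```python
import math

def inverse_sqrt_lr(step, warmup_steps, base_lr=1.0):
    """Inverse square root schedule: lr = base_lr / sqrt(max(step, warmup_steps)).
    Constant at base_lr / sqrt(warmup_steps) for the first warmup_steps steps,
    then decaying proportionally to 1 / sqrt(step)."""
    return base_lr / math.sqrt(max(step, warmup_steps))
```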
Given the following machine learning model name: Variational Autoencoder, provide a description of the model | A **Variational Autoencoder** is a type of likelihood-based generative model. It consists of an encoder, that takes in data $x$ as input and transforms this into a latent representation $z$, and a decoder, that takes a latent representation $z$ and returns a reconstruction $\hat{x}$. Inference is performed via variational inference to approximate the posterior of the model. |
Given the following machine learning model name: Temporal Activation Regularization, provide a description of the model | **Temporal Activation Regularization (TAR)** is a type of slowness regularization for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) that penalizes differences between states that have been explored in the past. Formally we minimize:
$$\beta{L\_{2}}\left(h\_{t} - h\_{t+1}\right)$$
where $L\_{2}$ is the $L\_{2}$ norm, $h_{t}$ is the output of the RNN at timestep $t$, and $\beta$ is a scaling coefficient. |
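The penalty can be computed directly from a sequence of hidden states. A minimal sketch, assuming the per-timestep $L\_{2}$ norms are summed over the sequence (an interpretation, since the formula above is written for a single step):

```python
import numpy as np

def temporal_activation_regularization(hidden_states, beta=2.0):
    """TAR penalty from the formula above: beta * L2(h_t - h_{t+1}),
    accumulated over consecutive timesteps. `hidden_states` has shape (T, d)."""
    h = np.asarray(hidden_states, dtype=float)
    diffs = h[1:] - h[:-1]  # h_{t+1} - h_t for each step
    return beta * np.linalg.norm(diffs, axis=1).sum()
```

The penalty is zero when the hidden state is constant over time, so it encourages slowly varying RNN activations.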
Given the following machine learning model name: Attribute2Font, provide a description of the model | **Attribute2Font** is a model that automatically creates fonts by synthesizing visually pleasing glyph images according to user-specified attributes and their corresponding values. Specifically, Attribute2Font is trained to perform font style transfer between any two fonts conditioned on their attribute values. After training, the model can generate glyph images in accordance with an arbitrary set of font attribute values. A unit named Attribute Attention Module is designed to make those generated glyph images better embody the prominent font attributes. A semi-supervised learning scheme is also introduced to exploit a large number of unlabeled fonts. |
Given the following machine learning model name: RelDiff, provide a description of the model | RelDiff generates entity-relation-entity embeddings in a single embedding space. RelDiff adopts two fundamental vector algebraic operators to transform entity and relation embeddings from knowledge graphs into entity-relation-entity embeddings. In particular, RelDiff can encode finer-grained information about the relations than is captured when separate embeddings are learned for the entities and the relations. |
Given the following machine learning model name: End-to-end Adaptive Distributed Training, provide a description of the model | Distributed training has become a pervasive and effective approach for training a large neural network
(NN) model on massive data. However, it is very challenging to satisfy requirements
from various NN models, diverse computing resources, and their dynamic changes during a training
job. In this study, we design our distributed training framework in a systematic end-to-end view to
provide the built-in adaptive ability for different scenarios, especially for industrial applications and
production environments, by fully considering resource allocation, model partition, task placement,
and distributed execution. Based on the unified distributed graph and the unified cluster object,
our adaptive framework is equipped with a global cost model and a global planner, which can
enable arbitrary parallelism, resource-aware placement, multi-mode execution, fault-tolerant, and
elastic distributed training. The experiments demonstrate that our framework can satisfy various
requirements from the diversity of applications and the heterogeneity of resources with highly
competitive performance. |
Given the following machine learning model name: LLaMA, provide a description of the model | **LLaMA** is a collection of foundation language models ranging from 7B to 65B parameters. It is based on the transformer architecture with various improvements that were subsequently proposed. The main differences from the original architecture are listed below.
- RMSNorm normalizing function is used to improve the training stability, by normalizing the input of each transformer sub-layer, instead of normalizing the output.
- The ReLU non-linearity is replaced by the SwiGLU activation function to improve performance.
- Absolute positional embeddings are removed and instead rotary positional embeddings (RoPE) are added at each layer of the network. |
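The RMSNorm normalization mentioned in the first bullet can be sketched as follows. This is a minimal NumPy version; the epsilon value and gain handling are assumptions about a typical implementation, not LLaMA's exact code:

```python
import numpy as np

def rms_norm(x, gain, eps=1e-6):
    """RMSNorm: rescale x by the reciprocal of its root-mean-square along the
    last axis, then apply a learned per-dimension gain. Unlike LayerNorm there
    is no mean subtraction and no bias term."""
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True) + eps)
    return x / rms * gain
```

In LLaMA this normalization is applied to the input of each transformer sub-layer (pre-normalization) rather than to its output.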
Given the following machine learning model name: Models Genesis, provide a description of the model | **Models Genesis**, or **Generic Autodidactic Models**, is a self-supervised approach for learning 3D image representations. The objective of Models Genesis is to learn a common image representation that is transferable and generalizable across diseases, organs, and modalities. It consists of an encoder-decoder architecture with skip connections in between, and is trained to learn a common image representation by restoring the original sub-volume $x\_{i}$ (as ground truth) from the transformed one $\bar{x}\_{i}$ (as input), in which the reconstruction loss (MSE) is computed between the model prediction $x'\_{i}$ and ground truth $x\_{i}$. Once trained, the encoder alone can be fine-tuned for target classification tasks; while the encoder and decoder together can be fine-tuned for target segmentation tasks. |
Given the following machine learning model name: HaloNet, provide a description of the model | A **HaloNet** is a self-attention based model for efficient image classification. It relies on a local self-attention architecture that efficiently maps to existing hardware with haloing. The formulation breaks translational equivariance, but the authors observe that it improves throughput and accuracies over the centered local self-attention used in regular self-attention. The approach also utilises a strided self-attentive downsampling operation for multi-scale feature extraction. |
Given the following machine learning model name: ConvMLP, provide a description of the model | **ConvMLP** is a hierarchical convolutional MLP for visual recognition, which consists of a stage-wise, co-design of [convolution](https://paperswithcode.com/method/convolution) layers, and MLPs. The Conv Stage consists of $C$ convolutional blocks with $1\times 1$ and $3\times 3$ kernel sizes. It is repeated $M$ times before a down convolution is utilized to express a level $L$. The MLP-Conv Stage consists of Channelwise MLPs, with skip layers, and a [depthwise convolution](https://paperswithcode.com/method/depthwise-convolution). This is repeated $M$ times before a down convolution is utilized to express a level $\mathcal{L}$. |