GNN

Graph Neural Network.

Classifier

The attribute labels_ gives the label of each node of the graph.

class sknetwork.gnn.GNNClassifier(dims: Optional[Union[int, list]] = None, layer_types: Union[str, list] = 'Conv', activations: Union[str, list] = 'ReLu', use_bias: Union[bool, list] = True, normalizations: Union[str, list] = 'both', self_embeddings: Union[bool, list] = True, sample_sizes: Union[int, list] = 25, loss: Union[sknetwork.gnn.base_activation.BaseLoss, str] = 'CrossEntropy', layers: Optional[list] = None, optimizer: Union[sknetwork.gnn.optimizer.BaseOptimizer, str] = 'Adam', learning_rate: float = 0.01, early_stopping: bool = True, patience: int = 10, verbose: bool = False)[source]

Graph Neural Network for node classification.

Parameters
  • dims (list or int) – Dimensions of the output of each layer (in forward direction). If an integer, dimension of the output layer (no hidden layer). Optional if layers is specified.

  • layer_types (list or str) – Layer types (in forward direction). If a string, use the same type of layer for all layers. Can be 'Conv', graph convolutional layer (default) or 'Sage' (GraphSage).

  • activations (list or str) – Activation functions (in forward direction). If a string, use the same activation function for all layers. Can be either 'Identity', 'Relu', 'Sigmoid' or 'Softmax' (default = 'Relu').

  • use_bias (list or bool) – Whether to use a bias term at each layer. If True, use a bias term at all layers.

  • normalizations (list or str) – Normalization of the adjacency matrix for message passing. If a string, use the same normalization for all layers. Can be either 'left' (left normalization by the degrees), 'right' (right normalization by the degrees), 'both' (symmetric normalization by the square root of degrees, default) or None (no normalization).

  • self_embeddings (list or bool) – Whether to add a self-embedding to each node of the graph for message passing. If True, add self-embeddings at all layers.

  • sample_sizes (list or int) – Size of neighborhood sampled for each node. Used only for 'Sage' layer type.

  • loss (str (default = 'CrossEntropy') or BaseLoss) – Loss function name or custom loss.

  • layers (list or None) – Custom layers. If used, previous parameters are ignored.

  • optimizer (str or optimizer) –

    • 'Adam', stochastic gradient-based optimizer (default).

    • 'GD', gradient descent.

  • learning_rate (float) – Learning rate.

  • early_stopping (bool (default = True)) – Whether to use early stopping to end training. If True, training terminates when the validation score has not improved for patience consecutive epochs.

  • patience (int (default = 10)) – Number of iterations with no improvement to wait before stopping fitting.

  • verbose (bool) – Verbose mode.

Variables
  • conv1, conv2, ... – Graph convolutional layers.

  • output_ (array) – Output of the GNN.

  • labels_ (np.ndarray) – Predicted node labels.

  • history_ (dict) – Training history per epoch: {'embedding', 'loss', 'train_accuracy', 'val_accuracy'}.

Example

>>> from sknetwork.gnn.gnn_classifier import GNNClassifier
>>> from sknetwork.data import karate_club
>>> import numpy as np
>>> graph = karate_club(metadata=True)
>>> adjacency = graph.adjacency
>>> labels_true = graph.labels
>>> labels = {i: labels_true[i] for i in [0, 1, 33]}
>>> features = adjacency.copy()
>>> gnn = GNNClassifier(dims=1, early_stopping=False)
>>> labels_pred = gnn.fit_predict(adjacency, features, labels, random_state=42)
>>> np.round(np.mean(labels_pred == labels_true), 2)
0.88
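
The parameter dims may also be a list to add hidden layers. A minimal sketch reusing the objects defined above; the hidden dimension 8 is an arbitrary illustrative choice and the resulting accuracy is not shown:

>>> gnn = GNNClassifier(dims=[8, 2], early_stopping=False)
>>> labels_pred = gnn.fit_predict(adjacency, features, labels, random_state=42)
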
backward(features: scipy.sparse._csr.csr_matrix, labels: numpy.ndarray, mask: numpy.ndarray)

Compute backpropagation.

Parameters
  • features (sparse.csr_matrix) – Features, array of shape (n_nodes, n_features).

  • labels (np.ndarray) – Labels, array of shape (n_nodes,).

  • mask (np.ndarray) – Boolean mask, array of shape (n_nodes,).

fit(adjacency: Union[scipy.sparse._csr.csr_matrix, numpy.ndarray], features: Union[scipy.sparse._csr.csr_matrix, numpy.ndarray], labels: numpy.ndarray, n_epochs: int = 100, validation: float = 0, reinit: bool = False, random_state: Optional[int] = None, history: bool = False) sknetwork.gnn.gnn_classifier.GNNClassifier[source]

Fit model to data and store trained parameters.

Parameters
  • adjacency (sparse.csr_matrix) – Adjacency matrix of the graph.

  • features (sparse.csr_matrix, np.ndarray) – Input feature of shape \((n, d)\) with \(n\) the number of nodes in the graph and \(d\) the size of feature space.

  • labels (dict or np.ndarray) – Known labels (dictionary or vector of int). Negative values are ignored.

  • n_epochs (int (default = 100)) – Number of epochs (iterations over the whole graph).

  • validation (float) – Proportion of the training set used for validation (between 0 and 1).

  • reinit (bool (default = False)) – If True, reinitialize the trainable parameters of the GNN (weights and biases).

  • random_state (int) – Random seed, used for reproducible results across multiple runs.

  • history (bool (default = False)) – If True, save training history.
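
A minimal sketch of a call to fit with a validation split and saved history, reusing adjacency, features and labels from the example above (the epoch count and split proportion are arbitrary illustrative choices):

>>> gnn = GNNClassifier(dims=1, early_stopping=False)
>>> gnn = gnn.fit(adjacency, features, labels, n_epochs=50, validation=0.2, history=True, random_state=42)
>>> history = gnn.history_  # dict with per-epoch 'embedding', 'loss', 'train_accuracy', 'val_accuracy'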

fit_predict(*args, **kwargs) numpy.ndarray

Fit algorithm to the data and return the labels. Same parameters as the fit method.

Returns

labels – Labels of the nodes.

Return type

np.ndarray

fit_predict_proba(*args, **kwargs) numpy.ndarray

Fit algorithm to the data and return the distribution over labels. Same parameters as the fit method.

Returns

probs – Probability distribution over labels.

Return type

np.ndarray

fit_transform(*args, **kwargs) numpy.ndarray

Fit algorithm to the data and return the embedding of the nodes. Same parameters as the fit method.

Returns

embedding – Embedding of the nodes.

Return type

np.ndarray
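
A minimal sketch, reusing the objects from the example above (the output dimension 2 is an arbitrary illustrative choice):

>>> gnn = GNNClassifier(dims=2, early_stopping=False)
>>> embedding = gnn.fit_transform(adjacency, features, labels, random_state=42)  # one row per node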

forward(adjacency: Union[list, scipy.sparse._csr.csr_matrix], features: Union[scipy.sparse._csr.csr_matrix, numpy.ndarray]) numpy.ndarray[source]

Perform a forward pass on the graph and return the output.

Parameters
  • adjacency (Union[list, sparse.csr_matrix]) – Adjacency matrix or list of sampled adjacency matrices.

  • features (sparse.csr_matrix, np.ndarray) – Features, array of shape (n_nodes, n_features).

Returns

output – Output of the GNN.

Return type

np.ndarray

get_params()

Get parameters as dictionary.

Returns

params – Parameters of the algorithm.

Return type

dict

predict(adjacency_vectors: Optional[Union[scipy.sparse._csr.csr_matrix, numpy.ndarray]] = None, feature_vectors: Optional[Union[scipy.sparse._csr.csr_matrix, numpy.ndarray]] = None) numpy.ndarray[source]

Predict labels for new nodes. If called without parameters, labels are returned for all nodes.

Parameters
  • adjacency_vectors (np.ndarray) – Square adjacency matrix. Array of shape (n, n).

  • feature_vectors (np.ndarray) – Features row vectors. Array of shape (n, n_feat). The number of features n_feat must match the number used during training.

Returns

labels – Label of each node of the graph.

Return type

np.ndarray
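
A minimal sketch, reusing the model fitted in the example above; called without arguments, predict returns the label of every node of the graph:

>>> labels_all = gnn.predict()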

predict_proba()

Return the probability distribution over labels.

print_log(*args)

Fill log with text.

set_params(params: dict) sknetwork.base.Algorithm

Set parameters of the algorithm.

Parameters

params (dict) – Parameters of the algorithm.

Returns

self

Return type

Algorithm

transform()

Return the embedding of nodes.

Convolution layers

class sknetwork.gnn.Convolution(layer_type: str, out_channels: int, activation: Optional[Union[sknetwork.gnn.base_activation.BaseActivation, str]] = 'Relu', use_bias: bool = True, normalization: str = 'both', self_embeddings: bool = True, sample_size: Optional[int] = None, loss: Optional[Union[sknetwork.gnn.base_activation.BaseLoss, str]] = None)[source]

Graph convolutional layer.

Apply the following function to the embedding \(X\):

\(\sigma(\bar AXW + b)\),

where \(\bar A\) is the normalized adjacency matrix (possibly with inserted self-embeddings), \(W\), \(b\) are trainable parameters and \(\sigma\) is the activation function.
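
The computation can be written directly with NumPy and SciPy. A minimal sketch of the symmetric normalization ('both') with self-embeddings and a ReLu activation, on the karate club graph with random weights (illustrative only, not the layer's internal code):

>>> import numpy as np
>>> from scipy import sparse
>>> from sknetwork.data import karate_club
>>> adjacency = karate_club()
>>> n = adjacency.shape[0]
>>> features = adjacency.copy()  # use adjacency rows as input features, as in the example above
>>> adjacency_loops = adjacency + sparse.identity(n, format='csr')  # add self-embeddings
>>> degrees = adjacency_loops.dot(np.ones(n))
>>> normalization = sparse.diags(1 / np.sqrt(degrees))
>>> weights = np.random.randn(features.shape[1], 4)  # random trainable weights, output dimension 4
>>> embedding = normalization.dot(adjacency_loops).dot(normalization).dot(features.dot(weights))  # \bar A X W
>>> output = np.maximum(embedding, 0)  # ReLu activation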

Parameters
  • layer_type (str) – Layer type. Can be either 'Conv', convolutional operator as in [1] or 'Sage', as in [2].

  • out_channels (int) – Dimension of the output.

  • activation (str (default = 'Relu') or custom activation.) – Activation function. If a string, can be either 'Identity', 'Relu', 'Sigmoid' or 'Softmax'.

  • use_bias (bool (default = True)) – If True, add a bias vector.

  • normalization (str (default = 'both')) – Normalization of the adjacency matrix for message passing. Can be either 'left' (left normalization by the degrees), 'right' (right normalization by the degrees), 'both' (symmetric normalization by the square root of degrees, default) or None (no normalization).

  • self_embeddings (bool (default = True)) – If True, consider the self-embedding of each node in addition to the embeddings of its neighbors.

  • sample_size (int (default = 25)) – Size of neighborhood sampled for each node. Used only for 'Sage' layer.

Variables
  • weight (np.ndarray) – Trainable weight matrix.

  • bias (np.ndarray) – Bias vector.

  • embedding (np.ndarray) – Embedding of the nodes (before activation).

  • output (np.ndarray) – Output of the layer (after activation).

References

[1] Kipf, T., & Welling, M. (2017). Semi-supervised Classification with Graph Convolutional Networks. 5th International Conference on Learning Representations.

[2] Hamilton, W., Ying, R., & Leskovec, J. (2017). Inductive Representation Learning on Large Graphs. NIPS.

forward(adjacency: Union[scipy.sparse._csr.csr_matrix, numpy.ndarray], features: Union[scipy.sparse._csr.csr_matrix, numpy.ndarray]) numpy.ndarray[source]

Compute graph convolution.

Parameters
  • adjacency – Adjacency matrix of the graph.

  • features (sparse.csr_matrix, np.ndarray) – Input feature of shape \((n, d)\) with \(n\) the number of nodes in the graph and \(d\) the size of feature space.

Returns

output – Output of the layer.

Return type

np.ndarray

Activation functions

class sknetwork.gnn.BaseActivation(name: str = 'custom')[source]

Base class for activation functions.

Parameters

name (str) – Name of the activation function.

static gradient(signal: numpy.ndarray, direction: numpy.ndarray) numpy.ndarray[source]

Gradient of the activation function.

Parameters
  • signal (np.ndarray, shape (n_samples, n_channels)) – Input signal.

  • direction (np.ndarray, shape (n_samples, n_channels)) – Direction where the gradient is taken.

Returns

gradient – Gradient.

Return type

np.ndarray, shape (n_samples, n_channels)

static output(signal: numpy.ndarray) numpy.ndarray[source]

Output of the activation function.

Parameters

signal (np.ndarray, shape (n_samples, n_channels)) – Input signal.

Returns

output – Output signal.

Return type

np.ndarray, shape (n_samples, n_channels)
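
Custom activations can be defined by subclassing BaseActivation. A minimal sketch of a hypothetical leaky ReLu, assuming (as the documentation of ReLu suggests) that gradient returns the elementwise product of direction with the derivative of the activation; the class name and slope are illustrative, not part of the library:

>>> import numpy as np
>>> from sknetwork.gnn import BaseActivation
>>> class LeakyReLu(BaseActivation):
...     """Hypothetical leaky ReLu activation (slope 0.01 for negative inputs)."""
...     def __init__(self):
...         super(LeakyReLu, self).__init__('LeakyReLu')
...     @staticmethod
...     def output(signal):
...         return np.where(signal > 0, signal, 0.01 * signal)
...     @staticmethod
...     def gradient(signal, direction):
...         return direction * np.where(signal > 0, 1., 0.01)

An instance could then be passed as the activation parameter of a layer, subject to the behavior of the installed version.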

class sknetwork.gnn.ReLu[source]

ReLu (Rectified Linear Unit) activation function:

\(\sigma(x) = \max(0, x)\)

static gradient(signal: numpy.ndarray, direction: numpy.ndarray) numpy.ndarray[source]

Gradient of the ReLu function.

static output(signal: numpy.ndarray) numpy.ndarray[source]

Output of the ReLu function.

class sknetwork.gnn.Sigmoid[source]

Sigmoid activation function:

\(\sigma(x) = \frac{1}{1+e^{-x}}\)

Also known as the logistic function.

static gradient(signal: numpy.ndarray, direction: numpy.ndarray) numpy.ndarray[source]

Gradient of the sigmoid function.

static output(signal: numpy.ndarray) numpy.ndarray[source]

Output of the sigmoid function.

class sknetwork.gnn.Softmax[source]

Softmax activation function:

\(\sigma(x) = \left(\frac{e^{x_1}}{\sum_{i=1}^N e^{x_i}}, \ldots, \frac{e^{x_N}}{\sum_{i=1}^N e^{x_i}}\right)\)

where \(N\) is the number of channels.
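
For reference, a minimal NumPy check of the formula; the stabilizing shift by the row maximum is a standard numerical trick, not necessarily what the library does internally:

>>> import numpy as np
>>> signal = np.array([[1.0, 2.0, 3.0]])
>>> exponentials = np.exp(signal - signal.max(axis=1, keepdims=True))
>>> probs = exponentials / exponentials.sum(axis=1, keepdims=True)  # rows sum to 1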

static gradient(signal: numpy.ndarray, direction: numpy.ndarray) numpy.ndarray[source]

Gradient of the softmax function.

static output(signal: numpy.ndarray) numpy.ndarray[source]

Output of the softmax function (rows sum to 1).

Loss functions

class sknetwork.gnn.BaseLoss(name: str = 'custom')[source]

Base class for loss functions.

static gradient(signal: numpy.ndarray, direction: numpy.ndarray) numpy.ndarray

Gradient of the activation function.

Parameters
  • signal (np.ndarray, shape (n_samples, n_channels)) – Input signal.

  • direction (np.ndarray, shape (n_samples, n_channels)) – Direction where the gradient is taken.

Returns

gradient – Gradient.

Return type

np.ndarray, shape (n_samples, n_channels)

static loss(signal: numpy.ndarray, labels: numpy.ndarray) float[source]

Get the loss value.

Parameters
  • signal (np.ndarray, shape (n_samples, n_channels)) – Input signal (before activation).

  • labels (np.ndarray, shape (n_samples)) – True labels.

static loss_gradient(signal: numpy.ndarray, labels: numpy.ndarray) numpy.ndarray[source]

Gradient of the loss function.

Parameters
  • signal (np.ndarray, shape (n_samples, n_channels)) – Input signal.

  • labels (np.ndarray, shape (n_samples,)) – True labels.

Returns

gradient – Gradient.

Return type

np.ndarray, shape (n_samples, n_channels)

static output(signal: numpy.ndarray) numpy.ndarray

Output of the activation function.

Parameters

signal (np.ndarray, shape (n_samples, n_channels)) – Input signal.

Returns

output – Output signal.

Return type

np.ndarray, shape (n_samples, n_channels)

class sknetwork.gnn.CrossEntropy[source]

Cross entropy loss with softmax activation.

For a single sample with value \(x\) and true label \(y\), the cross-entropy loss is:

\(-\sum_i 1_{\{y=i\}} \log (p_i)\)

with

\(p_i = e^{x_i} / \sum_j e^{x_j}\).

For \(n\) samples, return the average loss.
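
A minimal NumPy sketch of this computation for two samples and three channels (illustrative only, not the library's implementation):

>>> import numpy as np
>>> signal = np.array([[2.0, 1.0, 0.1], [0.5, 2.5, 0.3]])  # raw signal, shape (n_samples, n_channels)
>>> labels = np.array([0, 1])
>>> exponentials = np.exp(signal - signal.max(axis=1, keepdims=True))
>>> probs = exponentials / exponentials.sum(axis=1, keepdims=True)  # softmax
>>> loss = -np.mean(np.log(probs[np.arange(len(labels)), labels]))  # average cross-entropy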

static gradient(signal: numpy.ndarray, direction: numpy.ndarray) numpy.ndarray

Gradient of the softmax function.

static loss(signal: numpy.ndarray, labels: numpy.ndarray) float[source]

Get loss value.

Parameters
  • signal (np.ndarray, shape (n_samples, n_channels)) – Input signal (before activation). The number of channels must be at least 2.

  • labels (np.ndarray, shape (n_samples)) – True labels.

Returns

value – Loss value.

Return type

float

static loss_gradient(signal: numpy.ndarray, labels: numpy.ndarray) numpy.ndarray[source]

Get the gradient of the loss function (including activation).

Parameters
  • signal (np.ndarray, shape (n_samples, n_channels)) – Input signal (before activation).

  • labels (np.ndarray, shape (n_samples)) – True labels.

Returns

gradient – Gradient of the loss function.

Return type

np.ndarray, shape (n_samples, n_channels)

static output(signal: numpy.ndarray) numpy.ndarray

Output of the softmax function (rows sum to 1).

class sknetwork.gnn.BinaryCrossEntropy[source]

Binary cross entropy loss with sigmoid activation.

For a single sample with true label \(y\) and predicted probability \(p\), the binary cross-entropy loss is:

\(-y \log (p) - (1-y) \log (1 - p).\)

For \(n\) samples, return the average loss.
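
A minimal NumPy sketch of the average loss for a single channel (illustrative only; the input shapes expected by the library may differ):

>>> import numpy as np
>>> signal = np.array([1.5, -0.5, 2.0])  # raw signal, one value per sample
>>> labels = np.array([1, 0, 1])
>>> probs = 1 / (1 + np.exp(-signal))  # sigmoid
>>> loss = -np.mean(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))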

static gradient(signal: numpy.ndarray, direction: numpy.ndarray) numpy.ndarray

Gradient of the sigmoid function.

static loss(signal: numpy.ndarray, labels: numpy.ndarray) float[source]

Get loss value.

Parameters
  • signal (np.ndarray, shape (n_samples, n_channels)) – Input signal (before activation). The number of channels must be at least 2.

  • labels (np.ndarray, shape (n_samples)) – True labels.

Returns

value – Loss value.

Return type

float

static loss_gradient(signal: numpy.ndarray, labels: numpy.ndarray) numpy.ndarray[source]

Get the gradient of the loss function (including activation).

Parameters
  • signal (np.ndarray, shape (n_samples, n_channels)) – Input signal (before activation).

  • labels (np.ndarray, shape (n_samples)) – True labels.

Returns

gradient – Gradient of the loss function.

Return type

np.ndarray, shape (n_samples, n_channels)

static output(signal: numpy.ndarray) numpy.ndarray

Output of the sigmoid function.

Optimizers

class sknetwork.gnn.BaseOptimizer(learning_rate)[source]

Base class for optimizers.

Parameters

learning_rate (float (default = 0.01)) – Learning rate for updating weights.

step(gnn: BaseGNN)[source]

Update model parameters according to gradient values.

Parameters

gnn (BaseGNNClassifier) – Model containing parameters to update.

class sknetwork.gnn.ADAM(learning_rate: float = 0.01, beta1: float = 0.9, beta2: float = 0.999, eps: float = 1e-08)[source]

Adam optimizer.

Parameters
  • learning_rate (float (default = 0.01)) – Learning rate for updating weights.

  • beta1 (float (default = 0.9)) – Coefficient used for computing the running average of gradients.

  • beta2 (float (default = 0.999)) – Coefficient used for computing the running average of squared gradients.

  • eps (float (default = 1e-8)) – Term added to the denominator to improve stability.

References

Kingma, D. P., & Ba, J. (2014). Adam: A Method for Stochastic Optimization. 3rd International Conference on Learning Representations.

step(gnn: BaseGNN)[source]

Update model parameters according to gradient values and parameters.

Parameters

gnn (BaseGNNClassifier) – Model containing parameters to update.

class sknetwork.gnn.GD(learning_rate: float = 0.01)[source]

Gradient Descent optimizer.

Parameters

learning_rate (float (default = 0.01)) – Learning rate for updating weights.

step(gnn: BaseGNN)[source]

Update model parameters according to gradient values.

Parameters

gnn (BaseGNNClassifier) – Model containing parameters to update.
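
Optimizers are usually selected through the optimizer and learning_rate parameters of GNNClassifier rather than instantiated directly. A minimal sketch (the dimensions and learning rates are arbitrary illustrative choices):

>>> from sknetwork.gnn import GNNClassifier
>>> gnn_adam = GNNClassifier(dims=2, optimizer='Adam', learning_rate=0.01)
>>> gnn_gd = GNNClassifier(dims=2, optimizer='GD', learning_rate=0.05)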