TinyTurbo Module

This module provides a Python interface for Turbo encoding and for decoding with the classical decoders as well as the TinyTurbo decoder.

Functions

class deepcommpy.tinyturbo.TurboCode(code='lte', block_len=40, interleaver_type='qpp', interleaver_seed=0, puncture=False)
__init__(code='lte', block_len=40, interleaver_type='qpp', interleaver_seed=0, puncture=False)

Turbo Code object. Includes encoding and decoding functions.

Parameters

code : str

Options are ‘lte’ and ‘757’.

block_len : int

Length of the block to be encoded.

interleaver_type : str

Options are ‘qpp’ and ‘random’.

interleaver_seed : int

Seed used to initialize the interleaver.
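A minimal construction sketch (the argument values simply restate the defaults above; deepcommpy is assumed to be installed and importable):

    from deepcommpy.tinyturbo import TurboCode

    # Rate-1/3 LTE turbo code over blocks of 40 bits with a QPP interleaver
    # (these values restate the documented defaults).
    code = TurboCode(code='lte', block_len=40, interleaver_type='qpp',
                     interleaver_seed=0)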

encode(message_bits, puncture=False)

Turbo Encoder. Encode bits using a parallel concatenated rate-1/3 turbo code consisting of two rate-1/2 systematic convolutional component codes.

Parameters

message_bits : 2D torch Tensor containing {0, 1}, of shape (batch_size, M)

Stream of bits to be turbo encoded.

puncture : bool

If True, apply puncturing. Currently only the puncturing pattern ‘110101’ is supported.

Returns

stream : torch Tensor of turbo encoded codewords, of shape (batch_size, 3*M + 4*memory), where memory is the number of delay elements in the convolutional code and M is the message length.

First 3*M bits are [sys_1, non_sys1_1, non_sys2_1, …, sys_j, non_sys1_j, non_sys2_j, …, sys_M, non_sys1_M, non_sys2_M].

Next 2*memory bits are the termination bits of sys and non_sys1: [sys_term_1, non_sys1_term_1, …, sys_term_j, non_sys1_term_j, …, sys_term_memory, non_sys1_term_memory].

Next 2*memory bits are the termination bits of sys_interleaved and non_sys2: [sys_inter_term_1, non_sys2_term_1, …, sys_inter_term_j, non_sys2_term_j, …, sys_inter_term_memory, non_sys2_term_memory].

Encoded bit streams corresponding to the systematic output and the two non-systematic outputs from the two component codes.
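A short encoding sketch, assuming the TurboCode instance code constructed above with block_len=40 (the exact codeword width depends on the component code’s memory):

    import torch

    # Batch of 128 random messages of length M = 40.
    message_bits = torch.randint(0, 2, (128, 40))

    codewords = code.encode(message_bits)                 # (128, 3*40 + 4*memory)
    punctured = code.encode(message_bits, puncture=True)  # pattern '110101'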

tinyturbo_decode(received_llrs, number_iterations, tinyturbo=None, L_int=None, method='max_log_MAP', puncture=False)

Turbo Decoder. Decode a Turbo code using TinyTurbo weights.

Parameters

received_llrs : LLRs of shape (batch_size, 3*M + 4*memory)

Received LLRs corresponding to the received Turbo encoded bits

number_iterations : int

Number of iterations of the BCJR algorithm.

tinyturbo : TinyTurbo, optional

Contains the normal and interleaved weights for TinyTurbo. Defaults to the weights from the TinyTurbo paper.

L_int : intrinsic LLRs of shape (batch_size, 3*M + 4*memory)

Intrinsic (prior) LLRs. Set to zeros if there is no prior.

method : str

Turbo decoding method: max-log-MAP or MAP.

puncture : bool

If True, apply puncturing. Currently only the puncturing pattern ‘110101’ is supported.

Returns

L_ext : torch Tensor of decoded LLRs, of shape (batch_size, M + memory)

decoded_bits : L_ext > 0

Decoded bits (hard decisions on L_ext).
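An end-to-end decoding sketch, continuing from the encoding example above. The BPSK mapping and LLR sign convention are assumptions chosen to be consistent with the decoded_bits = L_ext > 0 decision rule, and only the first M positions of decoded_bits are compared against the message:

    import torch

    # AWGN channel at an illustrative SNR; bit 0 -> -1, bit 1 -> +1, so a positive
    # channel LLR favours bit 1 and matches the L_ext > 0 hard decision
    # (assumed convention).
    snr_db = 0.0
    sigma = 10 ** (-snr_db / 20)
    tx = 2.0 * codewords.float() - 1.0
    rx = tx + sigma * torch.randn_like(tx)
    received_llrs = 2.0 * rx / sigma ** 2

    # Decode for 3 iterations with the default TinyTurbo weights (tinyturbo=None).
    L_ext, decoded_bits = code.tinyturbo_decode(received_llrs, 3,
                                                method='max_log_MAP')

    # Bit error rate over the first M = 40 (message) positions.
    ber = (decoded_bits[:, :40].float() != message_bits.float()).float().mean()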

turbo_decode(received_llrs, number_iterations, L_int=None, method='max_log_MAP', puncture=False)

Turbo Decoder. Decode a Turbo code.

Parameters

received_llrs : LLRs of shape (batch_size, 3*M + 4*memory)

Received LLRs corresponding to the received Turbo encoded bits

number_iterations : int

Number of iterations of the BCJR algorithm.

L_int : intrinsic LLRs of shape (batch_size, 3*M + 4*memory)

Intrinsic (prior) LLRs. Set to zeros if there is no prior.

method : str

Turbo decoding method: max-log-MAP or MAP.

puncture : bool

If True, apply puncturing. Currently only the puncturing pattern ‘110101’ is supported.

Returns

L_ext : torch Tensor of decoded LLRs, of shape (batch_size, M + memory)

decoded_bits : L_ext > 0

Decoded bits (hard decisions on L_ext).
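For comparison, the same received LLRs from the sketch above can be decoded with the classical decoder:

    # Classical max-log-MAP turbo decoding of the same received LLRs, 6 iterations.
    L_ext, decoded_bits = code.turbo_decode(received_llrs, 6, method='max_log_MAP')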

class deepcommpy.tinyturbo.TinyTurbo(block_len, num_iter, init_type='ones', type='scale')
__init__(block_len, num_iter, init_type='ones', type='scale')

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(*input: Any) → None

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
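A sketch of constructing a TinyTurbo weight module explicitly and passing it to the decoder (block length and iteration count are illustrative and must match how the module is used):

    from deepcommpy.tinyturbo import TinyTurbo

    # Per-position scaling weights for block length 40 and 3 decoding iterations.
    tinyturbo = TinyTurbo(block_len=40, num_iter=3, init_type='ones', type='scale')

    # Supply the module to TurboCode.tinyturbo_decode via the tinyturbo argument.
    L_ext, decoded_bits = code.tinyturbo_decode(received_llrs, 3, tinyturbo=tinyturbo)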

deepcommpy.tinyturbo.train_tinyturbo(turbocode, device, config=None, loaded_weights=None)

Training function for TinyTurbo. Use config[‘target’] = ‘scale’ (default).

If config[‘target’] == ‘LLR’, then training proceeds as in TurboNet+ (Y. He, J. Zhang, S. Jin, C.-K. Wen, and G. Y. Li, “Model-driven DNN decoder for turbo codes: Design, simulation, and experimental results,” IEEE Transactions on Communications, vol. 68, no. 10, pp. 6127–6140, 2020).

Parameters

turbocode : TurboCode

Turbo code object.

device : torch.device

Device to use for computations, e.g. torch.device(‘cuda:0’) or torch.device(‘cpu’).

config : dict, optional

Configuration dictionary. Example config provided as deepcommpy/tinyturbo/train_config.json.

loaded_weights : dict, optional

Dictionary of weights to load into the model.

Returns

tinyturbo : TinyTurbo

Trained TinyTurbo model.

training_losses : list

List of training losses.

training_bers : list

List of training bit error rates.

step : int

Number of training steps.
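A training sketch; the config path follows the example file mentioned above, and the unpacking follows the documented return values:

    import json
    import torch
    from deepcommpy.tinyturbo import train_tinyturbo

    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

    # Load the example configuration shipped with the package.
    with open('deepcommpy/tinyturbo/train_config.json') as f:
        config = json.load(f)

    tinyturbo, training_losses, training_bers, step = train_tinyturbo(
        code, device, config=config)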

deepcommpy.tinyturbo.test_tinyturbo(turbocode, device, tinyturbo=None, config=None)

Test TinyTurbo on a test set

Parameters

turbocode : TurboCode

Turbo code object.

device : torch.device

Device to use for computations, e.g. torch.device(‘cuda:0’) or torch.device(‘cpu’).

tinyturbo : TinyTurbo, optional

If None, the default TinyTurbo weights from the paper are used.

config : dict, optional

If None, the default config from test_config.json is used.
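A testing sketch (passing tinyturbo=None and config=None falls back to the paper’s weights and the default test_config.json):

    from deepcommpy.tinyturbo import test_tinyturbo

    # Evaluate the weights trained in the sketch above; the return value is not
    # documented here, so it is not unpacked.
    results = test_tinyturbo(code, device, tinyturbo=tinyturbo, config=None)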