I am trying to create a model by concatenating two models together.

The models are meant to handle time series, and I'm experimenting with Conv1D layers.

As these expect a 3D input shape `batch_shape + (steps, input_dim)`, and the Keras TimeseriesGenerator provides exactly that, I'm happy to be able to use it for single-head models.
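To illustrate that shape: with two features and `n_steps = 5`, each generator batch has shape `(batch_size, 5, 2)`. A quick numpy-only sketch of the same windowing (illustration of the layout, not the Keras class itself):

```python
import numpy as np

series = np.arange(20).reshape(10, 2)  # 10 time steps, 2 features
n_steps = 5
# build the sliding windows preceding each target step; this is the
# (samples, steps, features) layout that TimeseriesGenerator batches have
windows = np.stack([series[i:i + n_steps] for i in range(len(series) - n_steps)])
print(windows.shape)  # (5, 5, 2)
```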

```
import pandas as pd
import numpy as np
import random
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import (Input, Dense, Conv1D, BatchNormalization,
                                     Flatten, Dropout, MaxPooling1D,
                                     concatenate)
from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator
from tensorflow.keras.utils import plot_model

# random dummy data: four features plus one target on a daily index
data = pd.DataFrame(index=pd.date_range(start='2020-01-01', periods=300, freq='D'))
data['featureA'] = [random.random() for _ in range(len(data))]
data['featureB'] = [random.random() for _ in range(len(data))]
data['featureC'] = [random.random() for _ in range(len(data))]
data['featureD'] = [random.random() for _ in range(len(data))]
data['target'] = [random.random() for _ in range(len(data))]

Xtrain_AB, Xtest_AB, yTrain_AB, yTest_AB = train_test_split(data[['featureA', 'featureB']],
                                                            data['target'], test_size=0.2,
                                                            shuffle=False)
Xtrain_CD, Xtest_CD, yTrain_CD, yTest_CD = train_test_split(data[['featureC', 'featureD']],
                                                            data['target'], test_size=0.2,
                                                            shuffle=False)

n_steps = 5
train_gen_AB = TimeseriesGenerator(Xtrain_AB, yTrain_AB,
                                   length=n_steps,
                                   sampling_rate=1,
                                   batch_size=64,
                                   shuffle=False)
test_gen_AB = TimeseriesGenerator(Xtest_AB, yTest_AB,
                                  length=n_steps,
                                  sampling_rate=1,
                                  batch_size=64,
                                  shuffle=False)

# first head: featureA/featureB
n_features_AB = len(Xtrain_AB.columns)
input_AB = Input(shape=(n_steps, n_features_AB))
# input_shape is already defined by the Input layer above
layer_AB = Conv1D(filters=128, kernel_size=3, activation='relu')(input_AB)
layer_AB = MaxPooling1D(pool_size=2)(layer_AB)
layer_AB = Flatten()(layer_AB)
dense_AB = Dense(50, activation='relu')(layer_AB)
output_AB = Dense(1)(dense_AB)
model_AB = Model(inputs=input_AB, outputs=output_AB)
model_AB.compile(optimizer='adam', loss='mse')
model_AB.summary()
model_AB.fit(train_gen_AB, epochs=1, verbose=1)
print(f'evaluation: {model_AB.evaluate(test_gen_AB)}')
#plot_model(model_AB)

train_gen_CD = TimeseriesGenerator(Xtrain_CD, yTrain_CD,
                                   length=n_steps,
                                   sampling_rate=1,
                                   batch_size=64,
                                   shuffle=False)
test_gen_CD = TimeseriesGenerator(Xtest_CD, yTest_CD,
                                  length=n_steps,
                                  sampling_rate=1,
                                  batch_size=64,
                                  shuffle=False)

# second head: featureC/featureD
n_features_CD = len(Xtrain_CD.columns)
input_CD = Input(shape=(n_steps, n_features_CD))
layer_CD = Conv1D(filters=128, kernel_size=3, activation='relu')(input_CD)
layer_CD = MaxPooling1D(pool_size=2)(layer_CD)
layer_CD = Flatten()(layer_CD)
dense_CD = Dense(50, activation='relu')(layer_CD)
output_CD = Dense(1)(dense_CD)
model_CD = Model(inputs=input_CD, outputs=output_CD)
model_CD.compile(optimizer='adam', loss='mse')
model_CD.summary()
model_CD.fit(train_gen_CD, epochs=1, verbose=1)
print(f'evaluation: {model_CD.evaluate(test_gen_CD)}')
#plot_model(model_CD)
```

This works fine for each of the two models. :)

Now I would like to experiment with concatenating both models into one, as I think this might later let me add further 'heads' and train them in parallel, and I guess a single combined model might be easier to handle than many separate ones. Such a dual-head model can easily be created like this:

```
merge = concatenate(inputs=[layer_AB, layer_CD])
dense_merge = Dense(50, activation='relu')(merge)
output_merge = Dense(1)(dense_merge)
model_dual_head = Model(inputs=[input_AB, input_CD], outputs=output_merge)
model_dual_head.compile(optimizer='adam', loss='mse')
model_dual_head.summary()
print(f'dual head model input_shape:{model_dual_head.input_shape}')
plot_model(model_dual_head)
```

This dual_head_model has an input_shape consisting of two 3D shapes,

`[(None, 5, 2), (None, 5, 2)]`

and, plotted with plot_model, finally looks like this:
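If I read that input_shape correctly, `fit` would need `x` to be a list of two arrays, each shaped `(batch, 5, 2)`, sharing a single target vector. Dummy numpy arrays of that structure (shapes only, random data; the `fit` call is my assumption of what should match):

```python
import numpy as np

n_samples, n_steps, n_features = 8, 5, 2
x_ab = np.random.rand(n_samples, n_steps, n_features)  # head 1 input
x_cd = np.random.rand(n_samples, n_steps, n_features)  # head 2 input
y = np.random.rand(n_samples)                          # one shared target
# presumably model_dual_head.fit(x=[x_ab, x_cd], y=y) expects this structure
print([a.shape for a in (x_ab, x_cd)], y.shape)
```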

Unfortunately, I don't know how to fit it 🙁 and hope you can show me how to generate data in the needed shape.

I tried passing the previously used generators as a list

`model_dual_head.fit([train_gen_AB, train_gen_CD], epochs=1, verbose=1)`

and also lists of the raw input DataFrames: `model_dual_head.fit(x=[Xtrain_AB, Xtrain_CD], y=[yTrain_AB, yTrain_CD], epochs=1, verbose=1)`,

but neither seems to be in the right shape.
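One idea I'm considering (untested with `fit`) is a small wrapper that zips the two generators batch-by-batch, so each item comes out as `([x_ab, x_cd], y)`; a fit-compatible version would presumably subclass `tf.keras.utils.Sequence` with the same `__len__`/`__getitem__`. A framework-free sketch, demonstrated with dummy batches instead of real TimeseriesGenerator objects:

```python
import numpy as np

class DualHeadBatches:
    """Zip two aligned batch sources into ([x_ab, x_cd], y) items.
    Sketch only; a version usable with model.fit would subclass
    tf.keras.utils.Sequence but keep this same interface."""

    def __init__(self, gen_ab, gen_cd):
        assert len(gen_ab) == len(gen_cd), 'generators must be aligned'
        self.gen_ab, self.gen_cd = gen_ab, gen_cd

    def __len__(self):
        return len(self.gen_ab)

    def __getitem__(self, i):
        x_ab, y = self.gen_ab[i]
        x_cd, _ = self.gen_cd[i]  # both heads share the same targets
        return [x_ab, x_cd], y

# demo with one dummy batch per head
batch = (np.zeros((4, 5, 2)), np.zeros(4))
dual = DualHeadBatches([batch], [batch])
x_list, y = dual[0]
print(len(x_list), x_list[0].shape, y.shape)  # 2 (4, 5, 2) (4,)
```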

Thanks in advance

Wasili

Source: Python Questions