Event Detection With Time Series Data Using Python¶
Classification models on time series data with a labeled target variable can be used to predict events, for example medical events extracted from EHR data. Packages such as Timesia and eventdetector_ts support this kind of analysis. In this post, I use decision tree and XGBoost models, together with a feed-forward network built through eventdetector_ts, to detect events in a credit card fraud dataset.
Load data¶
In [ ]:
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
Data source:
In [ ]:
# dataset = pd.read_csv("https://ia802604.us.archive.org/9/items/credit_card_fraud_dataset/credit_card_fraud_dataset.csv")
# events = pd.read_csv("https://ia902700.us.archive.org/21/items/credit_card_fraud_events/credit_card_fraud_events.csv")
I load the data from files downloaded to a local directory on my laptop.
In [ ]:
dataset = pd.read_csv("/Users/nnthieu/Downloads/Data/credit_card_fraud_dataset.csv")
events = pd.read_csv("/Users/nnthieu/Downloads/Data/credit_card_fraud_events.csv")
dataset.rename(columns={'Unnamed: 0': 'Date'}, inplace=True)
In [ ]:
dataset['Date'] = pd.to_datetime(dataset['Date'])
dataset = dataset.set_index('Date')
events['events'] = pd.to_datetime(events['events'])
In [ ]:
print(dataset.head(2))
print(events.head(2))
                           V1        V2        V3        V4        V5  \
Date
1970-01-01 00:00:00 -1.359807 -0.072781  2.536347  1.378155 -0.338321
1970-01-01 00:00:01  1.191857  0.266151  0.166480  0.448154  0.060018

                           V6        V7        V8        V9       V10  ...  \
Date                                                                   ...
1970-01-01 00:00:00  0.462388  0.239599  0.098698  0.363787  0.090794  ...
1970-01-01 00:00:01 -0.082361 -0.078803  0.085102 -0.255425 -0.166974  ...

                          V20       V21       V22       V23       V24  \
Date
1970-01-01 00:00:00  0.251412 -0.018307  0.277838 -0.110474  0.066928
1970-01-01 00:00:01 -0.069083 -0.225775 -0.638672  0.101288 -0.339846

                          V25       V26       V27       V28  Amount
Date
1970-01-01 00:00:00  0.128539 -0.189115  0.133558 -0.021053  149.62
1970-01-01 00:00:01  0.167170  0.125895 -0.008983  0.014724    2.69

[2 rows x 29 columns]
               events
0 1970-01-01 00:09:01
1 1970-01-01 00:10:23
In [ ]:
events.rename(columns={'events': 'Date'}, inplace=True)
events['Label'] = 1
print(events.head())
print(events.shape)
                 Date  Label
0 1970-01-01 00:09:01      1
1 1970-01-01 00:10:23      1
2 1970-01-01 01:22:00      1
3 1970-01-01 01:41:48      1
4 1970-01-01 01:45:29      1
(492, 2)
In [ ]:
merged_df = pd.merge(dataset, events, on='Date', how='left')
merged_df['Label'] = merged_df['Label'].fillna(0)
print(merged_df.head())
print(merged_df['Label'].unique())
                 Date        V1        V2        V3        V4        V5  \
0 1970-01-01 00:00:00 -1.359807 -0.072781  2.536347  1.378155 -0.338321
1 1970-01-01 00:00:01  1.191857  0.266151  0.166480  0.448154  0.060018
2 1970-01-01 00:00:02 -1.358354 -1.340163  1.773209  0.379780 -0.503198
3 1970-01-01 00:00:03 -0.966272 -0.185226  1.792993 -0.863291 -0.010309
4 1970-01-01 00:00:04 -1.158233  0.877737  1.548718  0.403034 -0.407193

         V6        V7        V8        V9  ...       V21       V22       V23  \
0  0.462388  0.239599  0.098698  0.363787  ... -0.018307  0.277838 -0.110474
1 -0.082361 -0.078803  0.085102 -0.255425  ... -0.225775 -0.638672  0.101288
2  1.800499  0.791461  0.247676 -1.514654  ...  0.247998  0.771679  0.909412
3  1.247203  0.237609  0.377436 -1.387024  ... -0.108300  0.005274 -0.190321
4  0.095921  0.592941 -0.270533  0.817739  ... -0.009431  0.798278 -0.137458

        V24       V25       V26       V27       V28  Amount  Label
0  0.066928  0.128539 -0.189115  0.133558 -0.021053  149.62    0.0
1 -0.339846  0.167170  0.125895 -0.008983  0.014724    2.69    0.0
2 -0.689281 -0.327642 -0.139097 -0.055353 -0.059752  378.66    0.0
3 -1.175575  0.647376 -0.221929  0.062723  0.061458  123.50    0.0
4  0.141267 -0.206010  0.502292  0.219422  0.215153   69.99    0.0

[5 rows x 31 columns]
[0. 1.]
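Before modeling, it is worth quantifying how imbalanced the label is. A minimal check, assuming the merged_df built above:

# Count rows per class; events (Label = 1) are a tiny minority
print(merged_df['Label'].value_counts())
print(merged_df['Label'].mean())  # fraction of rows that are events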
In [ ]:
sns.lineplot(data=dataset, x='Date', y='Amount', errorbar=None)
/Users/anaconda3/lib/python3.11/site-packages/seaborn/_oldcore.py:1119: FutureWarning: use_inf_as_na option is deprecated and will be removed in a future version. Convert inf values to NaN before operating instead.
  with pd.option_context('mode.use_inf_as_na', True):
Out[ ]:
<Axes: xlabel='Date', ylabel='Amount'>
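To see where the labeled events fall on this series, one option is to overlay the event timestamps on the Amount plot. A sketch, assuming the dataset and events frames built above:

# Plot the Amount series and mark each labeled event with a vertical line
fig, ax = plt.subplots(figsize=(12, 4))
ax.plot(dataset.index, dataset['Amount'], linewidth=0.5)
for t in events['Date']:
    ax.axvline(t, color='red', alpha=0.3, linewidth=0.5)
ax.set_xlabel('Date')
ax.set_ylabel('Amount')
plt.show()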
Building classification models¶
Decision tree model¶
I split the data into train and test subsets, stratified by 'Label', because rows with 'Label' = 1 are a small fraction of the data compared to 'Label' = 0.
In [ ]:
from sklearn.model_selection import train_test_split
x = merged_df.drop(['Label', 'Date'], axis=1)
y = merged_df['Label']
# Define a weight for each class. Note: with an imbalanced target, the rare
# class (Label = 1) would normally get the larger weight; these values are
# illustrative, and the models below are fit without using them.
weight_for_class1 = 1.0  # weight for class 1 (events)
weight_for_class0 = 2.0  # weight for class 0 (non-events)
# Map the weights onto each row based on its label
weights = y.map({0: weight_for_class0, 1: weight_for_class1})
# Split the data into train and test sets
x_train, x_test, y_train, y_test, weights_train, weights_test = train_test_split(
x, y, weights, test_size=0.3, stratify=y, random_state=42)
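Note that weights_train is carried through the split but never passed to the classifiers below. To make a model actually account for the imbalance, a hedged sketch: pass the weights through fit's sample_weight argument, or let scikit-learn derive class weights from the training frequencies:

from sklearn.tree import DecisionTreeClassifier

# Option 1: supply the per-row weights computed above
weighted_tree = DecisionTreeClassifier(random_state=42)
weighted_tree.fit(x_train, y_train, sample_weight=weights_train)

# Option 2: weight classes inversely to their frequency
balanced_tree = DecisionTreeClassifier(class_weight='balanced', random_state=42)
balanced_tree.fit(x_train, y_train)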
In [ ]:
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, classification_report
model = DecisionTreeClassifier(random_state=42)
model.fit(x_train, y_train)
y_pred = model.predict(x_test)
accuracy = accuracy_score(y_test, y_pred)
report = classification_report(y_test, y_pred)
print(f"Accuracy: {accuracy:.2f}")
print("Classification Report:")
print(report)
Accuracy: 1.00
Classification Report:
              precision    recall  f1-score   support

         0.0       1.00      1.00      1.00     85295
         1.0       0.81      0.74      0.77       148

    accuracy                           1.00     85443
   macro avg       0.90      0.87      0.88     85443
weighted avg       1.00      1.00      1.00     85443
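With only 148 positives among 85,443 test rows, overall accuracy is dominated by the majority class, so the recall on class 1.0 is the number to watch. A confusion matrix makes the trade-off explicit; a minimal sketch using the fitted model and y_pred above:

from sklearn.metrics import confusion_matrix

# Rows are true classes, columns are predicted classes
cm = confusion_matrix(y_test, y_pred)
print(cm)
tn, fp, fn, tp = cm.ravel()
print(f"Missed events (false negatives): {fn}, false alarms (false positives): {fp}")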
XGB model¶
In [ ]:
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score, classification_report
model = XGBClassifier(random_state=42)
model.fit(x_train, y_train)
y_pred = model.predict(x_test)
accuracy = accuracy_score(y_test, y_pred)
report = classification_report(y_test, y_pred)
print(f"Accuracy: {accuracy:.2f}")
print("Classification Report:")
print(report)
Accuracy: 1.00
Classification Report:
              precision    recall  f1-score   support

         0.0       1.00      1.00      1.00     85295
         1.0       0.93      0.76      0.84       148

    accuracy                           1.00     85443
   macro avg       0.97      0.88      0.92     85443
weighted avg       1.00      1.00      1.00     85443
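XGBoost also has a built-in knob for class imbalance: scale_pos_weight, commonly set to the ratio of negative to positive training examples. A hedged sketch (this may shift the precision/recall balance away from the numbers above):

from xgboost import XGBClassifier

# Weight positives by the negative/positive ratio in the training data
ratio = (y_train == 0).sum() / (y_train == 1).sum()
weighted_xgb = XGBClassifier(scale_pos_weight=ratio, random_state=42)
weighted_xgb.fit(x_train, y_train)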
Eventdetector-ts model¶
This model currently fails with an error I have not yet worked around, but I include it here so I can come back and fix it later.
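Before running it, it helps to know what width and step mean: the meta-model cuts the series into overlapping partitions of width consecutive time steps, sliding forward by step (the FFN summary below shows the resulting input shape of (None, 2, 29)). A rough NumPy illustration of that windowing, not the package's actual code:

import numpy as np

# Hypothetical illustration of width=2, step=1: overlapping windows of two
# consecutive rows, advancing one row at a time (first 1000 rows for brevity)
values = dataset.to_numpy()[:1000]
width, step = 2, 1
windows = np.stack([values[i:i + width]
                    for i in range(0, len(values) - width + 1, step)])
print(windows.shape)  # (n_windows, 2, 29), matching the FFN input shape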
In [ ]:
import tensorflow.keras as keras
import eventdetector_ts
from eventdetector_ts import FFN
from eventdetector_ts.metamodel.meta_model import MetaModel
meta_model = MetaModel(dataset = dataset, events = events, width=2, step=1,
output_dir='ser', batch_size=3200, s_h=0.01, models=[(FFN, 1)],
hyperparams_ffn=(1, 1, 20, 20, "sigmoid")
)
meta_model.fit()
2024-05-09 06:12:02 [INFO] eventdetector_ts.metamodel: checks if the index of the dataset is already in the datetime format.
2024-05-09 06:12:02 [INFO] eventdetector_ts.metamodel: Computing the time sampling and time unit of the dataset
2024-05-09 06:12:02 [WARNING] eventdetector_ts.metamodel: The time sampling t_s is 1 seconds
2024-05-09 06:12:02 [INFO] eventdetector_ts.metamodel: {
    'batch_size': 3200, 'delta': 1, 'dropout': 0.3, 'epochs': 256, 'epsilon': 0.0002,
    'fill_nan': 'zeros', 'hyperparams_cnn': (16, 64, 3, 8, 1, 2, 'relu'),
    'hyperparams_ffn': (1, 1, 20, 20, 'sigmoid'), 'hyperparams_mm_network': (1, 32, 'sigmoid'),
    'hyperparams_rnn': (1, 2, 16, 128, 'tanh'), 'hyperparams_transformer': (256, 4, 1, True, 'relu'),
    'last_act_func': 'sigmoid', 'models': [('FFN', 1)], 'pa': 5, 'remove_overlapping_events': True,
    's_h': 0.01, 'save_models_as_dot_format': False, 'scaler': 'StandardScaler', 't_max': 1.5,
    't_r': 0.97, 'test_size': 0.2, 'time_window': None, 'type_training': 'average',
    'use_kfold': False, 'val_size': 0.2, 'width_events_s': 1}
2024-05-09 06:12:02 [WARNING] eventdetector_ts.metamodel: The working directory '/Users/nnthieu/ser' exists and it will be deleted
2024-05-09 06:12:02 [INFO] eventdetector_ts.metamodel: Creating the working directory at: '/Users/nnthieu/ser'
2024-05-09 06:12:02 [INFO] eventdetector_ts.metamodel: Computes the middle date of events...
2024-05-09 06:12:02 [INFO] eventdetector_ts.metamodel: Removes events that occur too close together...
2024-05-09 06:12:02 [WARNING] eventdetector_ts.metamodel: A total of 31/492 events were removed due to overlapping
2024-05-09 06:12:02 [INFO] eventdetector_ts.metamodel: Convert events to intervals...
2024-05-09 06:12:02 [INFO] eventdetector_ts.metamodel: Computing overlapping partitions...
/Users/anaconda3/lib/python3.11/site-packages/eventdetector_ts/data/helpers_data.py:113: UserWarning: Discarding nonzero nanoseconds in conversion.
  dt = date.to_pydatetime()
2024-05-09 06:12:03 [INFO] eventdetector_ts.metamodel: Computing op...
2024-05-09 06:12:07 [INFO] eventdetector_ts.metamodel: Create the following models: ['FFN']
2024-05-09 06:12:07 [INFO] eventdetector_ts.metamodel: Split the data into training, validation, and test sets and apply the specified scaler to each time step...
2024-05-09 06:12:07 [INFO] eventdetector_ts.metamodel: Saves the scalers to disk...
Saving scaling...2/2
2024-05-09 06:12:07 [INFO] eventdetector_ts.metamodel: Fits the created models to the training data...
2024-05-09 06:12:07 [INFO] eventdetector_ts.models: Summary of FFN_0...
Model: "FFN_0"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type)                    ┃ Output Shape           ┃       Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ input (InputLayer)              │ (None, 2, 29)          │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ flatten_8 (Flatten)             │ (None, 58)             │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_16 (Dense)                │ (None, 20)             │         1,180 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dropout_8 (Dropout)             │ (None, 20)             │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_17 (Dense)                │ (None, 1)              │            21 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 1,201 (4.69 KB)
Trainable params: 1,201 (4.69 KB)
Non-trainable params: 0 (0.00 B)
2024-05-09 06:12:07 [INFO] eventdetector_ts.models: None
2024-05-09 06:12:07 [INFO] eventdetector_ts.models: Fitting of FFN_0...
Epoch 1/256
57/57 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step - loss: 0.3849 - val_loss: 0.2734
Epoch 2/256
57/57 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.2062 - val_loss: 0.1784
Epoch 3/256
57/57 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.1114 - val_loss: 0.1079
[... epoch-by-epoch output trimmed: training and validation loss decrease steadily through epoch 255 ...]
Epoch 256/256
57/57 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.0015 - val_loss: 5.1560e-08
2024-05-09 06:12:53 [INFO] eventdetector_ts.models: Evaluating model FFN_0 on test data
18/18 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 4.7634e-08
2024-05-09 06:12:53 [INFO] eventdetector_ts.models: The loss value of model FFN_0 on test data is 0.0000
2024-05-09 06:12:53 [INFO] eventdetector_ts.models: Selecting best models based on the min MSE 0.0000 and epsilon 0.0002:
2024-05-09 06:12:53 [INFO] eventdetector_ts.models: Best models selected: dict_keys(['FFN_0'])
2024-05-09 06:12:53 [INFO] eventdetector_ts.metamodel: Saving the best models...
2024-05-09 06:12:53 [INFO] eventdetector_ts.models: Current model to be saved on the disk is FFN_0
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[61], line 9
      4 from eventdetector_ts.metamodel.meta_model import MetaModel
      5 meta_model = MetaModel(dataset = dataset, events = events, width=2, step=1,
      6                        output_dir='ser', batch_size=3200, s_h=0.01, models=[(FFN, 1)],
      7                        hyperparams_ffn=(1, 1, 20, 20, "sigmoid")
      8                        )
----> 9 meta_model.fit()

File /Users/anaconda3/lib/python3.11/site-packages/eventdetector_ts/metamodel/meta_model.py:471, in MetaModel.fit(self)
    464 """
    465 Run prepare_data_and_computing_op, build_stacking_learning, event_extraction_optimization, and plot_save
    466
    467 Returns:
    468     None
    469 """
    470 self.prepare_data_and_computing_op()
--> 471 self.build_stacking_learning()
    472 self.event_extraction_optimization()
    473 self.plot_save()

File /Users/anaconda3/lib/python3.11/site-packages/eventdetector_ts/metamodel/meta_model.py:419, in MetaModel.build_stacking_learning(self)
    417 self.model_trainer.fitting_models(self.model_creator.created_models)
    418 logger_meta_model.info("Saving the best models...")
--> 419 self.model_trainer.save_best_models(output_dir=self.output_dir)
    420 predicted_y, loss, test_y = self.model_trainer.train_meta_model(type_training=self.type_training,
    421                                                                 hyperparams_mm_network=self.hyperparams_mm_network,
    422                                                                 output_dir=self.output_dir)
    424 self.optimization_data.set_predicted_op(predicted_op=predicted_y)

File /Users/anaconda3/lib/python3.11/site-packages/eventdetector_ts/models/models_trainer.py:160, in ModelTrainer.save_best_models(self, output_dir)
    158 # Save the model to the specified directory
    159 model_path = os.path.join(path, model_name)
--> 160 model.save(model_path)
    161 logger_models.info("Models saved successfully.")

File /Users/anaconda3/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py:122, in filter_traceback.<locals>.error_handler(*args, **kwargs)
    119 filtered_tb = _process_traceback_frames(e.__traceback__)
    120 # To get the full stack trace, call:
    121 # `keras.config.disable_traceback_filtering()`
--> 122 raise e.with_traceback(filtered_tb) from None
    123 finally:
    124     del filtered_tb

File /Users/anaconda3/lib/python3.11/site-packages/keras/src/saving/saving_api.py:106, in save_model(model, filepath, overwrite, **kwargs)
    102     legacy_h5_format.save_model_to_hdf5(
    103         model, filepath, overwrite, include_optimizer
    104     )
    105 else:
--> 106     raise ValueError(
    107         "Invalid filepath extension for saving. "
    108         "Please add either a `.keras` extension for the native Keras "
    109         f"format (recommended) or a `.h5` extension. "
    110         "Use `model.export(filepath)` if you want to export a SavedModel "
    111         "for use with TFLite/TFServing/etc. "
    112         f"Received: filepath={filepath}."
    113     )

ValueError: Invalid filepath extension for saving. Please add either a `.keras` extension for the native Keras format (recommended) or a `.h5` extension. Use `model.export(filepath)` if you want to export a SavedModel for use with TFLite/TFServing/etc. Received: filepath=/Users/nnthieu/ser/models/FFN_0.
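The ValueError comes from Keras 3, which refuses to save a model to a path without a .keras or .h5 extension, while eventdetector_ts passes an extensionless path. Until the package is updated, one untested workaround (an assumption on my part, not a documented fix) is to patch keras.Model.save before calling meta_model.fit() so a missing extension gets appended; downgrading to a TensorFlow/Keras version that accepts extensionless paths should also work.

import keras

_original_save = keras.Model.save

def _save_with_extension(self, filepath, *args, **kwargs):
    # Hypothetical shim: Keras 3 requires .keras or .h5; append .keras if absent
    filepath = str(filepath)
    if not filepath.endswith(('.keras', '.h5')):
        filepath += '.keras'
    return _original_save(self, filepath, *args, **kwargs)

keras.Model.save = _save_with_extension  # apply before meta_model.fit()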