.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "beginner/chatbot_tutorial.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_beginner_chatbot_tutorial.py:

Chatbot Tutorial
================

**Author:** `Matthew Inkawhich `_

.. GENERATED FROM PYTHON SOURCE LINES 11-81

In this tutorial, we explore a fun and interesting use case of recurrent sequence-to-sequence models. We will train a simple chatbot using movie scripts from the `Cornell Movie-Dialogs Corpus `__.

Conversational models are a hot topic in artificial intelligence research. Chatbots can be found in a variety of settings, including customer service applications and online helpdesks. These bots are often powered by retrieval-based models, which output predefined responses to questions of certain forms. In a highly restricted domain like a company’s IT helpdesk, these models may be sufficient; however, they are not robust enough for more general use cases. Teaching a machine to carry out a meaningful conversation with a human in multiple domains is a research question that is far from solved. Recently, the deep learning boom has allowed for powerful generative models like Google’s `Neural Conversational Model `__, which marks a large step towards multi-domain generative conversational models. In this tutorial, we will implement this kind of model in PyTorch.

.. figure:: /_static/img/chatbot/bot.png
   :align: center
   :alt: bot

.. code-block:: python

   > hello?
   Bot: hello .
   > where am I?
   Bot: you re in a hospital .
   > who are you?
   Bot: i m a lawyer .
   > how are you doing?
   Bot: i m fine .
   > are you my friend?
   Bot: no .
   > you're under arrest
   Bot: i m trying to help you !
   > i'm just kidding
   Bot: i m sorry .
   > where are you from?
   Bot: san francisco .
   > it's time for me to leave
   Bot: i know .
   > goodbye
   Bot: goodbye .

**Tutorial Highlights**

- Handle loading and preprocessing of the `Cornell Movie-Dialogs Corpus `__ dataset
- Implement a sequence-to-sequence model with `Luong attention mechanism(s) `__
- Jointly train encoder and decoder models using mini-batches
- Implement greedy-search decoding module
- Interact with trained chatbot

**Acknowledgments**

This tutorial borrows code from the following sources:

1) Yuan-Kuei Wu’s pytorch-chatbot implementation: https://github.com/ywk991112/pytorch-chatbot

2) Sean Robertson’s practical-pytorch seq2seq-translation example: https://github.com/spro/practical-pytorch/tree/master/seq2seq-translation

3) FloydHub Cornell Movie Corpus preprocessing code: https://github.com/floydhub/textutil-preprocess-cornell-movie-corpus

.. GENERATED FROM PYTHON SOURCE LINES 84-88

Preparations
------------

To get started, `download `__ the Movie-Dialogs Corpus zip file and put it in a ``data/`` directory under the current directory.

.. GENERATED FROM PYTHON SOURCE LINES 88-117

.. code-block:: default

   # After that, let’s import some necessities.

   import torch
   from torch.jit import script, trace
   import torch.nn as nn
   from torch import optim
   import torch.nn.functional as F
   import csv
   import random
   import re
   import os
   import unicodedata
   import codecs
   from io import open
   import itertools
   import math
   import json

   # If the current `accelerator `__ is available, we will use it.
   # Otherwise, we use the CPU.
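   # Note: ``torch.accelerator`` is assumed here to be available, which
   # requires a recent PyTorch release. On older versions, a CUDA-only
   # fallback with the same effect would be:
   #   device = "cuda" if torch.cuda.is_available() else "cpu"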
device = torch.accelerator.current_accelerator().type if torch.accelerator.is_available() else "cpu" print(f"Using {device} device") .. rst-class:: sphx-glr-script-out .. code-block:: none Using cuda device .. GENERATED FROM PYTHON SOURCE LINES 118-140 Load & Preprocess Data ---------------------- The next step is to reformat our data file and load the data into structures that we can work with. The `Cornell Movie-Dialogs Corpus `__ is a rich dataset of movie character dialog: - 220,579 conversational exchanges between 10,292 pairs of movie characters - 9,035 characters from 617 movies - 304,713 total utterances This dataset is large and diverse, and there is a great variation of language formality, time periods, sentiment, etc. Our hope is that this diversity makes our model robust to many forms of inputs and queries. First, we’ll take a look at some lines of our datafile to see the original format. .. GENERATED FROM PYTHON SOURCE LINES 140-153 .. code-block:: default corpus_name = "movie-corpus" corpus = os.path.join("data", corpus_name) def printLines(file, n=10): with open(file, 'rb') as datafile: lines = datafile.readlines() for line in lines[:n]: print(line) printLines(os.path.join(corpus, "utterances.jsonl")) .. rst-class:: sphx-glr-script-out .. code-block:: none b'{"id": "L1045", "conversation_id": "L1044", "text": "They do not!", "speaker": "u0", "meta": {"movie_id": "m0", "parsed": [{"rt": 1, "toks": [{"tok": "They", "tag": "PRP", "dep": "nsubj", "up": 1, "dn": []}, {"tok": "do", "tag": "VBP", "dep": "ROOT", "dn": [0, 2, 3]}, {"tok": "not", "tag": "RB", "dep": "neg", "up": 1, "dn": []}, {"tok": "!", "tag": ".", "dep": "punct", "up": 1, "dn": []}]}]}, "reply-to": "L1044", "timestamp": null, "vectors": []}\n' b'{"id": "L1044", "conversation_id": "L1044", "text": "They do to!", "speaker": "u2", "meta": {"movie_id": "m0", "parsed": [{"rt": 1, "toks": [{"tok": "They", "tag": "PRP", "dep": "nsubj", "up": 1, "dn": []}, {"tok": "do", "tag": "VBP", "dep": "ROOT", "dn": [0, 2, 3]}, {"tok": "to", "tag": "TO", "dep": "dobj", "up": 1, "dn": []}, {"tok": "!", "tag": ".", "dep": "punct", "up": 1, "dn": []}]}]}, "reply-to": null, "timestamp": null, "vectors": []}\n' b'{"id": "L985", "conversation_id": "L984", "text": "I hope so.", "speaker": "u0", "meta": {"movie_id": "m0", "parsed": [{"rt": 1, "toks": [{"tok": "I", "tag": "PRP", "dep": "nsubj", "up": 1, "dn": []}, {"tok": "hope", "tag": "VBP", "dep": "ROOT", "dn": [0, 2, 3]}, {"tok": "so", "tag": "RB", "dep": "advmod", "up": 1, "dn": []}, {"tok": ".", "tag": ".", "dep": "punct", "up": 1, "dn": []}]}]}, "reply-to": "L984", "timestamp": null, "vectors": []}\n' b'{"id": "L984", "conversation_id": "L984", "text": "She okay?", "speaker": "u2", "meta": {"movie_id": "m0", "parsed": [{"rt": 1, "toks": [{"tok": "She", "tag": "PRP", "dep": "nsubj", "up": 1, "dn": []}, {"tok": "okay", "tag": "RB", "dep": "ROOT", "dn": [0, 2]}, {"tok": "?", "tag": ".", "dep": "punct", "up": 1, "dn": []}]}]}, "reply-to": null, "timestamp": null, "vectors": []}\n' b'{"id": "L925", "conversation_id": "L924", "text": "Let\'s go.", "speaker": "u0", "meta": {"movie_id": "m0", "parsed": [{"rt": 0, "toks": [{"tok": "Let", "tag": "VB", "dep": "ROOT", "dn": [2, 3]}, {"tok": "\'s", "tag": "PRP", "dep": "nsubj", "up": 2, "dn": []}, {"tok": "go", "tag": "VB", "dep": "ccomp", "up": 0, "dn": [1]}, {"tok": ".", "tag": ".", "dep": "punct", "up": 0, "dn": []}]}]}, "reply-to": "L924", "timestamp": null, "vectors": []}\n' b'{"id": "L924", "conversation_id": "L924", "text": "Wow", "speaker": 
"u2", "meta": {"movie_id": "m0", "parsed": [{"rt": 0, "toks": [{"tok": "Wow", "tag": "UH", "dep": "ROOT", "dn": []}]}]}, "reply-to": null, "timestamp": null, "vectors": []}\n' b'{"id": "L872", "conversation_id": "L870", "text": "Okay -- you\'re gonna need to learn how to lie.", "speaker": "u0", "meta": {"movie_id": "m0", "parsed": [{"rt": 4, "toks": [{"tok": "Okay", "tag": "UH", "dep": "intj", "up": 4, "dn": []}, {"tok": "--", "tag": ":", "dep": "punct", "up": 4, "dn": []}, {"tok": "you", "tag": "PRP", "dep": "nsubj", "up": 4, "dn": []}, {"tok": "\'re", "tag": "VBP", "dep": "aux", "up": 4, "dn": []}, {"tok": "gon", "tag": "VBG", "dep": "ROOT", "dn": [0, 1, 2, 3, 6, 12]}, {"tok": "na", "tag": "TO", "dep": "aux", "up": 6, "dn": []}, {"tok": "need", "tag": "VB", "dep": "xcomp", "up": 4, "dn": [5, 8]}, {"tok": "to", "tag": "TO", "dep": "aux", "up": 8, "dn": []}, {"tok": "learn", "tag": "VB", "dep": "xcomp", "up": 6, "dn": [7, 11]}, {"tok": "how", "tag": "WRB", "dep": "advmod", "up": 11, "dn": []}, {"tok": "to", "tag": "TO", "dep": "aux", "up": 11, "dn": []}, {"tok": "lie", "tag": "VB", "dep": "xcomp", "up": 8, "dn": [9, 10]}, {"tok": ".", "tag": ".", "dep": "punct", "up": 4, "dn": []}]}]}, "reply-to": "L871", "timestamp": null, "vectors": []}\n' b'{"id": "L871", "conversation_id": "L870", "text": "No", "speaker": "u2", "meta": {"movie_id": "m0", "parsed": [{"rt": 0, "toks": [{"tok": "No", "tag": "UH", "dep": "ROOT", "dn": []}]}]}, "reply-to": "L870", "timestamp": null, "vectors": []}\n' b'{"id": "L870", "conversation_id": "L870", "text": "I\'m kidding. You know how sometimes you just become this \\"persona\\"? And you don\'t know how to quit?", "speaker": "u0", "meta": {"movie_id": "m0", "parsed": [{"rt": 2, "toks": [{"tok": "I", "tag": "PRP", "dep": "nsubj", "up": 2, "dn": []}, {"tok": "\'m", "tag": "VBP", "dep": "aux", "up": 2, "dn": []}, {"tok": "kidding", "tag": "VBG", "dep": "ROOT", "dn": [0, 1, 3]}, {"tok": ".", "tag": ".", "dep": "punct", "up": 2, "dn": [4]}, {"tok": " ", "tag": "_SP", "dep": "", "up": 3, "dn": []}]}, {"rt": 1, "toks": [{"tok": "You", "tag": "PRP", "dep": "nsubj", "up": 1, "dn": []}, {"tok": "know", "tag": "VBP", "dep": "ROOT", "dn": [0, 6, 11]}, {"tok": "how", "tag": "WRB", "dep": "advmod", "up": 3, "dn": []}, {"tok": "sometimes", "tag": "RB", "dep": "advmod", "up": 6, "dn": [2]}, {"tok": "you", "tag": "PRP", "dep": "nsubj", "up": 6, "dn": []}, {"tok": "just", "tag": "RB", "dep": "advmod", "up": 6, "dn": []}, {"tok": "become", "tag": "VBP", "dep": "ccomp", "up": 1, "dn": [3, 4, 5, 9]}, {"tok": "this", "tag": "DT", "dep": "det", "up": 9, "dn": []}, {"tok": "\\"", "tag": "``", "dep": "punct", "up": 9, "dn": []}, {"tok": "persona", "tag": "NN", "dep": "attr", "up": 6, "dn": [7, 8, 10]}, {"tok": "\\"", "tag": "\'\'", "dep": "punct", "up": 9, "dn": []}, {"tok": "?", "tag": ".", "dep": "punct", "up": 1, "dn": [12]}, {"tok": " ", "tag": "_SP", "dep": "", "up": 11, "dn": []}]}, {"rt": 4, "toks": [{"tok": "And", "tag": "CC", "dep": "cc", "up": 4, "dn": []}, {"tok": "you", "tag": "PRP", "dep": "nsubj", "up": 4, "dn": []}, {"tok": "do", "tag": "VBP", "dep": "aux", "up": 4, "dn": []}, {"tok": "n\'t", "tag": "RB", "dep": "neg", "up": 4, "dn": []}, {"tok": "know", "tag": "VB", "dep": "ROOT", "dn": [0, 1, 2, 3, 7, 8]}, {"tok": "how", "tag": "WRB", "dep": "advmod", "up": 7, "dn": []}, {"tok": "to", "tag": "TO", "dep": "aux", "up": 7, "dn": []}, {"tok": "quit", "tag": "VB", "dep": "xcomp", "up": 4, "dn": [5, 6]}, {"tok": "?", "tag": ".", "dep": "punct", "up": 4, "dn": []}]}]}, 
"reply-to": null, "timestamp": null, "vectors": []}\n' b'{"id": "L869", "conversation_id": "L866", "text": "Like my fear of wearing pastels?", "speaker": "u0", "meta": {"movie_id": "m0", "parsed": [{"rt": 0, "toks": [{"tok": "Like", "tag": "IN", "dep": "ROOT", "dn": [2, 6]}, {"tok": "my", "tag": "PRP$", "dep": "poss", "up": 2, "dn": []}, {"tok": "fear", "tag": "NN", "dep": "pobj", "up": 0, "dn": [1, 3]}, {"tok": "of", "tag": "IN", "dep": "prep", "up": 2, "dn": [4]}, {"tok": "wearing", "tag": "VBG", "dep": "pcomp", "up": 3, "dn": [5]}, {"tok": "pastels", "tag": "NNS", "dep": "dobj", "up": 4, "dn": []}, {"tok": "?", "tag": ".", "dep": "punct", "up": 0, "dn": []}]}]}, "reply-to": "L868", "timestamp": null, "vectors": []}\n' .. GENERATED FROM PYTHON SOURCE LINES 154-169 Create formatted data file ~~~~~~~~~~~~~~~~~~~~~~~~~~ For convenience, we'll create a nicely formatted data file in which each line contains a tab-separated *query sentence* and a *response sentence* pair. The following functions facilitate the parsing of the raw ``utterances.jsonl`` data file. - ``loadLinesAndConversations`` splits each line of the file into a dictionary of lines with fields: ``lineID``, ``characterID``, and text and then groups them into conversations with fields: ``conversationID``, ``movieID``, and lines. - ``extractSentencePairs`` extracts pairs of sentences from conversations .. GENERATED FROM PYTHON SOURCE LINES 169-212 .. code-block:: default # Splits each line of the file to create lines and conversations def loadLinesAndConversations(fileName): lines = {} conversations = {} with open(fileName, 'r', encoding='iso-8859-1') as f: for line in f: lineJson = json.loads(line) # Extract fields for line object lineObj = {} lineObj["lineID"] = lineJson["id"] lineObj["characterID"] = lineJson["speaker"] lineObj["text"] = lineJson["text"] lines[lineObj['lineID']] = lineObj # Extract fields for conversation object if lineJson["conversation_id"] not in conversations: convObj = {} convObj["conversationID"] = lineJson["conversation_id"] convObj["movieID"] = lineJson["meta"]["movie_id"] convObj["lines"] = [lineObj] else: convObj = conversations[lineJson["conversation_id"]] convObj["lines"].insert(0, lineObj) conversations[convObj["conversationID"]] = convObj return lines, conversations # Extracts pairs of sentences from conversations def extractSentencePairs(conversations): qa_pairs = [] for conversation in conversations.values(): # Iterate over all the lines of the conversation for i in range(len(conversation["lines"]) - 1): # We ignore the last line (no answer for it) inputLine = conversation["lines"][i]["text"].strip() targetLine = conversation["lines"][i+1]["text"].strip() # Filter wrong samples (if one of the lists is empty) if inputLine and targetLine: qa_pairs.append([inputLine, targetLine]) return qa_pairs .. GENERATED FROM PYTHON SOURCE LINES 213-216 Now we’ll call these functions and create the file. We’ll call it ``formatted_movie_lines.txt``. .. GENERATED FROM PYTHON SOURCE LINES 216-243 .. 
code-block:: default # Define path to new file datafile = os.path.join(corpus, "formatted_movie_lines.txt") delimiter = '\t' # Unescape the delimiter delimiter = str(codecs.decode(delimiter, "unicode_escape")) # Initialize lines dict and conversations dict lines = {} conversations = {} # Load lines and conversations print("\nProcessing corpus into lines and conversations...") lines, conversations = loadLinesAndConversations(os.path.join(corpus, "utterances.jsonl")) # Write new csv file print("\nWriting newly formatted file...") with open(datafile, 'w', encoding='utf-8') as outputfile: writer = csv.writer(outputfile, delimiter=delimiter, lineterminator='\n') for pair in extractSentencePairs(conversations): writer.writerow(pair) # Print a sample of lines print("\nSample lines from file:") printLines(datafile) .. rst-class:: sphx-glr-script-out .. code-block:: none Processing corpus into lines and conversations... Writing newly formatted file... Sample lines from file: b'They do to!\tThey do not!\n' b'She okay?\tI hope so.\n' b"Wow\tLet's go.\n" b'"I\'m kidding. You know how sometimes you just become this ""persona""? And you don\'t know how to quit?"\tNo\n' b"No\tOkay -- you're gonna need to learn how to lie.\n" b"I figured you'd get to the good stuff eventually.\tWhat good stuff?\n" b'What good stuff?\t"The ""real you""."\n' b'"The ""real you""."\tLike my fear of wearing pastels?\n' b'do you listen to this crap?\tWhat crap?\n' b"What crap?\tMe. This endless ...blonde babble. I'm like, boring myself.\n" .. GENERATED FROM PYTHON SOURCE LINES 244-262 Load and trim data ~~~~~~~~~~~~~~~~~~ Our next order of business is to create a vocabulary and load query/response sentence pairs into memory. Note that we are dealing with sequences of **words**, which do not have an implicit mapping to a discrete numerical space. Thus, we must create one by mapping each unique word that we encounter in our dataset to an index value. For this we define a ``Voc`` class, which keeps a mapping from words to indexes, a reverse mapping of indexes to words, a count of each word and a total word count. The class provides methods for adding a word to the vocabulary (``addWord``), adding all words in a sentence (``addSentence``) and trimming infrequently seen words (``trim``). More on trimming later. .. GENERATED FROM PYTHON SOURCE LINES 262-316 .. 
code-block:: default # Default word tokens PAD_token = 0 # Used for padding short sentences SOS_token = 1 # Start-of-sentence token EOS_token = 2 # End-of-sentence token class Voc: def __init__(self, name): self.name = name self.trimmed = False self.word2index = {} self.word2count = {} self.index2word = {PAD_token: "PAD", SOS_token: "SOS", EOS_token: "EOS"} self.num_words = 3 # Count SOS, EOS, PAD def addSentence(self, sentence): for word in sentence.split(' '): self.addWord(word) def addWord(self, word): if word not in self.word2index: self.word2index[word] = self.num_words self.word2count[word] = 1 self.index2word[self.num_words] = word self.num_words += 1 else: self.word2count[word] += 1 # Remove words below a certain count threshold def trim(self, min_count): if self.trimmed: return self.trimmed = True keep_words = [] for k, v in self.word2count.items(): if v >= min_count: keep_words.append(k) print('keep_words {} / {} = {:.4f}'.format( len(keep_words), len(self.word2index), len(keep_words) / len(self.word2index) )) # Reinitialize dictionaries self.word2index = {} self.word2count = {} self.index2word = {PAD_token: "PAD", SOS_token: "SOS", EOS_token: "EOS"} self.num_words = 3 # Count default tokens for word in keep_words: self.addWord(word) .. GENERATED FROM PYTHON SOURCE LINES 317-328 Now we can assemble our vocabulary and query/response sentence pairs. Before we are ready to use this data, we must perform some preprocessing. First, we must convert the Unicode strings to ASCII using ``unicodeToAscii``. Next, we should convert all letters to lowercase and trim all non-letter characters except for basic punctuation (``normalizeString``). Finally, to aid in training convergence, we will filter out sentences with length greater than the ``MAX_LENGTH`` threshold (``filterPairs``). .. GENERATED FROM PYTHON SOURCE LINES 328-391 .. 
code-block:: default MAX_LENGTH = 10 # Maximum sentence length to consider # Turn a Unicode string to plain ASCII, thanks to # https://stackoverflow.com/a/518232/2809427 def unicodeToAscii(s): return ''.join( c for c in unicodedata.normalize('NFD', s) if unicodedata.category(c) != 'Mn' ) # Lowercase, trim, and remove non-letter characters def normalizeString(s): s = unicodeToAscii(s.lower().strip()) s = re.sub(r"([.!?])", r" \1", s) s = re.sub(r"[^a-zA-Z.!?]+", r" ", s) s = re.sub(r"\s+", r" ", s).strip() return s # Read query/response pairs and return a voc object def readVocs(datafile, corpus_name): print("Reading lines...") # Read the file and split into lines lines = open(datafile, encoding='utf-8').\ read().strip().split('\n') # Split every line into pairs and normalize pairs = [[normalizeString(s) for s in l.split('\t')] for l in lines] voc = Voc(corpus_name) return voc, pairs # Returns True if both sentences in a pair 'p' are under the MAX_LENGTH threshold def filterPair(p): # Input sequences need to preserve the last word for EOS token return len(p[0].split(' ')) < MAX_LENGTH and len(p[1].split(' ')) < MAX_LENGTH # Filter pairs using the ``filterPair`` condition def filterPairs(pairs): return [pair for pair in pairs if filterPair(pair)] # Using the functions defined above, return a populated voc object and pairs list def loadPrepareData(corpus, corpus_name, datafile, save_dir): print("Start preparing training data ...") voc, pairs = readVocs(datafile, corpus_name) print("Read {!s} sentence pairs".format(len(pairs))) pairs = filterPairs(pairs) print("Trimmed to {!s} sentence pairs".format(len(pairs))) print("Counting words...") for pair in pairs: voc.addSentence(pair[0]) voc.addSentence(pair[1]) print("Counted words:", voc.num_words) return voc, pairs # Load/Assemble voc and pairs save_dir = os.path.join("data", "save") voc, pairs = loadPrepareData(corpus, corpus_name, datafile, save_dir) # Print some pairs to validate print("\npairs:") for pair in pairs[:10]: print(pair) .. rst-class:: sphx-glr-script-out .. code-block:: none Start preparing training data ... Reading lines... Read 221282 sentence pairs Trimmed to 64313 sentence pairs Counting words... Counted words: 18082 pairs: ['they do to !', 'they do not !'] ['she okay ?', 'i hope so .'] ['wow', 'let s go .'] ['what good stuff ?', 'the real you .'] ['the real you .', 'like my fear of wearing pastels ?'] ['do you listen to this crap ?', 'what crap ?'] ['well no . . .', 'then that s all you had to say .'] ['then that s all you had to say .', 'but'] ['but', 'you always been this selfish ?'] ['have fun tonight ?', 'tons'] .. GENERATED FROM PYTHON SOURCE LINES 392-403 Another tactic that is beneficial to achieving faster convergence during training is trimming rarely used words out of our vocabulary. Decreasing the feature space will also soften the difficulty of the function that the model must learn to approximate. We will do this as a two-step process: 1) Trim words used under ``MIN_COUNT`` threshold using the ``voc.trim`` function. 2) Filter out pairs with trimmed words. .. GENERATED FROM PYTHON SOURCE LINES 403-439 .. 
code-block:: default

   MIN_COUNT = 3    # Minimum word count threshold for trimming

   def trimRareWords(voc, pairs, MIN_COUNT):
       # Trim words used under the MIN_COUNT from the voc
       voc.trim(MIN_COUNT)
       # Filter out pairs with trimmed words
       keep_pairs = []
       for pair in pairs:
           input_sentence = pair[0]
           output_sentence = pair[1]
           keep_input = True
           keep_output = True
           # Check input sentence
           for word in input_sentence.split(' '):
               if word not in voc.word2index:
                   keep_input = False
                   break
           # Check output sentence
           for word in output_sentence.split(' '):
               if word not in voc.word2index:
                   keep_output = False
                   break

           # Only keep pairs that do not contain trimmed word(s) in their input or output sentence
           if keep_input and keep_output:
               keep_pairs.append(pair)

       print("Trimmed from {} pairs to {}, {:.4f} of total".format(len(pairs), len(keep_pairs), len(keep_pairs) / len(pairs)))
       return keep_pairs

   # Trim voc and pairs
   pairs = trimRareWords(voc, pairs, MIN_COUNT)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

   keep_words 7833 / 18079 = 0.4333
   Trimmed from 64313 pairs to 53131, 0.8261 of total

.. GENERATED FROM PYTHON SOURCE LINES 440-491

Prepare Data for Models
-----------------------

Although we have put a great deal of effort into preparing and massaging our data into a nice vocabulary object and list of sentence pairs, our models will ultimately expect numerical torch tensors as inputs. One way to prepare the processed data for the models can be found in the `seq2seq translation tutorial `__. In that tutorial, we use a batch size of 1, meaning that all we have to do is convert the words in our sentence pairs to their corresponding indexes from the vocabulary and feed this to the models.

However, if you’re interested in speeding up training and/or would like to leverage GPU parallelization capabilities, you will need to train with mini-batches.

Using mini-batches also means that we must be mindful of the variation of sentence length in our batches. To accommodate sentences of different sizes in the same batch, we will make our batched input tensor of shape *(max_length, batch_size)*, where sentences shorter than the *max_length* are zero padded after an *EOS_token*.

If we simply convert our English sentences to tensors by converting words to their indexes (``indexesFromSentence``) and zero-pad, our tensor would have shape *(batch_size, max_length)* and indexing the first dimension would return a full sequence across all time-steps. However, we need to be able to index our batch along time, and across all sequences in the batch. Therefore, we transpose our input batch shape to *(max_length, batch_size)*, so that indexing across the first dimension returns a time step across all sentences in the batch. We handle this transpose implicitly in the ``zeroPadding`` function.

.. figure:: /_static/img/chatbot/seq2seq_batches.png
   :align: center
   :alt: batches

The ``inputVar`` function handles the process of converting sentences to tensors, ultimately creating a correctly shaped zero-padded tensor. It also returns a tensor of ``lengths`` for each of the sequences in the batch, which will later be passed to our encoder to pack the padded batch. The ``outputVar`` function performs a similar function to ``inputVar``, but instead of returning a ``lengths`` tensor, it returns a binary mask tensor and a maximum target sentence length. The binary mask tensor has the same shape as the output target tensor, but every element that is a *PAD_token* is 0 and all others are 1.
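To make the implicit transpose concrete, here is a minimal, self-contained sketch of the ``itertools.zip_longest`` trick that the ``zeroPadding`` function below relies on. The toy index lists are invented for illustration and are not real vocabulary indexes:

.. code-block:: python

   import itertools

   PAD_token = 0
   # Three toy sentences already converted to word indexes (lengths 4, 2, 3)
   batch = [[5, 6, 7, 2], [8, 2], [9, 10, 2]]

   # ``zip_longest`` pads every sequence to the longest length and, because
   # it zips element-wise, also transposes (batch_size, max_length) into
   # (max_length, batch_size)
   padded = list(itertools.zip_longest(*batch, fillvalue=PAD_token))
   print(padded)  # [(5, 8, 9), (6, 2, 10), (7, 0, 2), (2, 0, 0)]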
``batch2TrainData`` simply takes a bunch of pairs and returns the input and target tensors using the aforementioned functions. .. GENERATED FROM PYTHON SOURCE LINES 491-552 .. code-block:: default def indexesFromSentence(voc, sentence): return [voc.word2index[word] for word in sentence.split(' ')] + [EOS_token] def zeroPadding(l, fillvalue=PAD_token): return list(itertools.zip_longest(*l, fillvalue=fillvalue)) def binaryMatrix(l, value=PAD_token): m = [] for i, seq in enumerate(l): m.append([]) for token in seq: if token == PAD_token: m[i].append(0) else: m[i].append(1) return m # Returns padded input sequence tensor and lengths def inputVar(l, voc): indexes_batch = [indexesFromSentence(voc, sentence) for sentence in l] lengths = torch.tensor([len(indexes) for indexes in indexes_batch]) padList = zeroPadding(indexes_batch) padVar = torch.LongTensor(padList) return padVar, lengths # Returns padded target sequence tensor, padding mask, and max target length def outputVar(l, voc): indexes_batch = [indexesFromSentence(voc, sentence) for sentence in l] max_target_len = max([len(indexes) for indexes in indexes_batch]) padList = zeroPadding(indexes_batch) mask = binaryMatrix(padList) mask = torch.BoolTensor(mask) padVar = torch.LongTensor(padList) return padVar, mask, max_target_len # Returns all items for a given batch of pairs def batch2TrainData(voc, pair_batch): pair_batch.sort(key=lambda x: len(x[0].split(" ")), reverse=True) input_batch, output_batch = [], [] for pair in pair_batch: input_batch.append(pair[0]) output_batch.append(pair[1]) inp, lengths = inputVar(input_batch, voc) output, mask, max_target_len = outputVar(output_batch, voc) return inp, lengths, output, mask, max_target_len # Example for validation small_batch_size = 5 batches = batch2TrainData(voc, [random.choice(pairs) for _ in range(small_batch_size)]) input_variable, lengths, target_variable, mask, max_target_len = batches print("input_variable:", input_variable) print("lengths:", lengths) print("target_variable:", target_variable) print("mask:", mask) print("max_target_len:", max_target_len) .. rst-class:: sphx-glr-script-out .. code-block:: none input_variable: tensor([[ 67, 24, 36, 175, 11], [ 17, 515, 17, 580, 359], [ 22, 547, 662, 6, 14], [985, 332, 14, 10, 2], [ 28, 14, 2, 2, 0], [ 85, 2, 0, 0, 0], [ 10, 0, 0, 0, 0], [ 2, 0, 0, 0, 0]]) lengths: tensor([8, 6, 5, 5, 4]) target_variable: tensor([[ 175, 11, 85, 220, 11], [ 580, 121, 17, 225, 200], [ 85, 93, 62, 14, 2075], [ 66, 18, 831, 2, 161], [ 10, 5, 14, 0, 62], [ 2, 2029, 2, 0, 235], [ 0, 14, 0, 0, 14], [ 0, 2, 0, 0, 2]]) mask: tensor([[ True, True, True, True, True], [ True, True, True, True, True], [ True, True, True, True, True], [ True, True, True, True, True], [ True, True, True, False, True], [ True, True, True, False, True], [False, True, False, False, True], [False, True, False, False, True]]) max_target_len: 8 .. GENERATED FROM PYTHON SOURCE LINES 553-581 Define Models ------------- Seq2Seq Model ~~~~~~~~~~~~~ The brains of our chatbot is a sequence-to-sequence (seq2seq) model. The goal of a seq2seq model is to take a variable-length sequence as an input, and return a variable-length sequence as an output using a fixed-sized model. `Sutskever et al. `__ discovered that by using two separate recurrent neural nets together, we can accomplish this task. One RNN acts as an **encoder**, which encodes a variable length input sequence to a fixed-length context vector. 
In theory, this context vector (the final hidden layer of the RNN) will contain semantic information about the query sentence that is input to the bot. The second RNN is a **decoder**, which takes an input word and the context vector, and returns a guess for the next word in the sequence and a hidden state to use in the next iteration. .. figure:: /_static/img/chatbot/seq2seq_ts.png :align: center :alt: model Image source: https://jeddy92.github.io/JEddy92.github.io/ts_seq2seq_intro/ .. GENERATED FROM PYTHON SOURCE LINES 584-650 Encoder ~~~~~~~ The encoder RNN iterates through the input sentence one token (e.g. word) at a time, at each time step outputting an “output” vector and a “hidden state” vector. The hidden state vector is then passed to the next time step, while the output vector is recorded. The encoder transforms the context it saw at each point in the sequence into a set of points in a high-dimensional space, which the decoder will use to generate a meaningful output for the given task. At the heart of our encoder is a multi-layered Gated Recurrent Unit, invented by `Cho et al. `__ in 2014. We will use a bidirectional variant of the GRU, meaning that there are essentially two independent RNNs: one that is fed the input sequence in normal sequential order, and one that is fed the input sequence in reverse order. The outputs of each network are summed at each time step. Using a bidirectional GRU will give us the advantage of encoding both past and future contexts. Bidirectional RNN: .. figure:: /_static/img/chatbot/RNN-bidirectional.png :width: 70% :align: center :alt: rnn_bidir Image source: https://colah.github.io/posts/2015-09-NN-Types-FP/ Note that an ``embedding`` layer is used to encode our word indices in an arbitrarily sized feature space. For our models, this layer will map each word to a feature space of size *hidden_size*. When trained, these values should encode semantic similarity between similar meaning words. Finally, if passing a padded batch of sequences to an RNN module, we must pack and unpack padding around the RNN pass using ``nn.utils.rnn.pack_padded_sequence`` and ``nn.utils.rnn.pad_packed_sequence`` respectively. **Computation Graph:** 1) Convert word indexes to embeddings. 2) Pack padded batch of sequences for RNN module. 3) Forward pass through GRU. 4) Unpack padding. 5) Sum bidirectional GRU outputs. 6) Return output and final hidden state. **Inputs:** - ``input_seq``: batch of input sentences; shape=\ *(max_length, batch_size)* - ``input_lengths``: list of sentence lengths corresponding to each sentence in the batch; shape=\ *(batch_size)* - ``hidden``: hidden state; shape=\ *(n_layers x num_directions, batch_size, hidden_size)* **Outputs:** - ``outputs``: output features from the last hidden layer of the GRU (sum of bidirectional outputs); shape=\ *(max_length, batch_size, hidden_size)* - ``hidden``: updated hidden state from GRU; shape=\ *(n_layers x num_directions, batch_size, hidden_size)* .. GENERATED FROM PYTHON SOURCE LINES 650-678 .. 
code-block:: default class EncoderRNN(nn.Module): def __init__(self, hidden_size, embedding, n_layers=1, dropout=0): super(EncoderRNN, self).__init__() self.n_layers = n_layers self.hidden_size = hidden_size self.embedding = embedding # Initialize GRU; the input_size and hidden_size parameters are both set to 'hidden_size' # because our input size is a word embedding with number of features == hidden_size self.gru = nn.GRU(hidden_size, hidden_size, n_layers, dropout=(0 if n_layers == 1 else dropout), bidirectional=True) def forward(self, input_seq, input_lengths, hidden=None): # Convert word indexes to embeddings embedded = self.embedding(input_seq) # Pack padded batch of sequences for RNN module packed = nn.utils.rnn.pack_padded_sequence(embedded, input_lengths) # Forward pass through GRU outputs, hidden = self.gru(packed, hidden) # Unpack padding outputs, _ = nn.utils.rnn.pad_packed_sequence(outputs) # Sum bidirectional GRU outputs outputs = outputs[:, :, :self.hidden_size] + outputs[:, : ,self.hidden_size:] # Return output and final hidden state return outputs, hidden .. GENERATED FROM PYTHON SOURCE LINES 679-741 Decoder ~~~~~~~ The decoder RNN generates the response sentence in a token-by-token fashion. It uses the encoder’s context vectors, and internal hidden states to generate the next word in the sequence. It continues generating words until it outputs an *EOS_token*, representing the end of the sentence. A common problem with a vanilla seq2seq decoder is that if we rely solely on the context vector to encode the entire input sequence’s meaning, it is likely that we will have information loss. This is especially the case when dealing with long input sequences, greatly limiting the capability of our decoder. To combat this, `Bahdanau et al. `__ created an “attention mechanism” that allows the decoder to pay attention to certain parts of the input sequence, rather than using the entire fixed context at every step. At a high level, attention is calculated using the decoder’s current hidden state and the encoder’s outputs. The output attention weights have the same shape as the input sequence, allowing us to multiply them by the encoder outputs, giving us a weighted sum which indicates the parts of encoder output to pay attention to. `Sean Robertson’s `__ figure describes this very well: .. figure:: /_static/img/chatbot/attn2.png :align: center :alt: attn2 `Luong et al. `__ improved upon Bahdanau et al.’s groundwork by creating “Global attention”. The key difference is that with “Global attention”, we consider all of the encoder’s hidden states, as opposed to Bahdanau et al.’s “Local attention”, which only considers the encoder’s hidden state from the current time step. Another difference is that with “Global attention”, we calculate attention weights, or energies, using the hidden state of the decoder from the current time step only. Bahdanau et al.’s attention calculation requires knowledge of the decoder’s state from the previous time step. Also, Luong et al. provides various methods to calculate the attention energies between the encoder output and decoder output which are called “score functions”: .. figure:: /_static/img/chatbot/scores.png :width: 60% :align: center :alt: scores where :math:`h_t` = current target decoder state and :math:`\bar{h}_s` = all encoder states. Overall, the Global attention mechanism can be summarized by the following figure. Note that we will implement the “Attention Layer” as a separate ``nn.Module`` called ``Attn``. 
The output of this module is a softmax normalized weights tensor of shape *(batch_size, 1, max_length)*. .. figure:: /_static/img/chatbot/global_attn.png :align: center :width: 60% :alt: global_attn .. GENERATED FROM PYTHON SOURCE LINES 741-783 .. code-block:: default # Luong attention layer class Attn(nn.Module): def __init__(self, method, hidden_size): super(Attn, self).__init__() self.method = method if self.method not in ['dot', 'general', 'concat']: raise ValueError(self.method, "is not an appropriate attention method.") self.hidden_size = hidden_size if self.method == 'general': self.attn = nn.Linear(self.hidden_size, hidden_size) elif self.method == 'concat': self.attn = nn.Linear(self.hidden_size * 2, hidden_size) self.v = nn.Parameter(torch.FloatTensor(hidden_size)) def dot_score(self, hidden, encoder_output): return torch.sum(hidden * encoder_output, dim=2) def general_score(self, hidden, encoder_output): energy = self.attn(encoder_output) return torch.sum(hidden * energy, dim=2) def concat_score(self, hidden, encoder_output): energy = self.attn(torch.cat((hidden.expand(encoder_output.size(0), -1, -1), encoder_output), 2)).tanh() return torch.sum(self.v * energy, dim=2) def forward(self, hidden, encoder_outputs): # Calculate the attention weights (energies) based on the given method if self.method == 'general': attn_energies = self.general_score(hidden, encoder_outputs) elif self.method == 'concat': attn_energies = self.concat_score(hidden, encoder_outputs) elif self.method == 'dot': attn_energies = self.dot_score(hidden, encoder_outputs) # Transpose max_length and batch_size dimensions attn_energies = attn_energies.t() # Return the softmax normalized probability scores (with added dimension) return F.softmax(attn_energies, dim=1).unsqueeze(1) .. GENERATED FROM PYTHON SOURCE LINES 784-816 Now that we have defined our attention submodule, we can implement the actual decoder model. For the decoder, we will manually feed our batch one time step at a time. This means that our embedded word tensor and GRU output will both have shape *(1, batch_size, hidden_size)*. **Computation Graph:** 1) Get embedding of current input word. 2) Forward through unidirectional GRU. 3) Calculate attention weights from the current GRU output from (2). 4) Multiply attention weights to encoder outputs to get new "weighted sum" context vector. 5) Concatenate weighted context vector and GRU output using Luong eq. 5. 6) Predict next word using Luong eq. 6 (without softmax). 7) Return output and final hidden state. **Inputs:** - ``input_step``: one time step (one word) of input sequence batch; shape=\ *(1, batch_size)* - ``last_hidden``: final hidden layer of GRU; shape=\ *(n_layers x num_directions, batch_size, hidden_size)* - ``encoder_outputs``: encoder model’s output; shape=\ *(max_length, batch_size, hidden_size)* **Outputs:** - ``output``: softmax normalized tensor giving probabilities of each word being the correct next word in the decoded sequence; shape=\ *(batch_size, voc.num_words)* - ``hidden``: final hidden state of GRU; shape=\ *(n_layers x num_directions, batch_size, hidden_size)* .. GENERATED FROM PYTHON SOURCE LINES 816-860 .. 
code-block:: default class LuongAttnDecoderRNN(nn.Module): def __init__(self, attn_model, embedding, hidden_size, output_size, n_layers=1, dropout=0.1): super(LuongAttnDecoderRNN, self).__init__() # Keep for reference self.attn_model = attn_model self.hidden_size = hidden_size self.output_size = output_size self.n_layers = n_layers self.dropout = dropout # Define layers self.embedding = embedding self.embedding_dropout = nn.Dropout(dropout) self.gru = nn.GRU(hidden_size, hidden_size, n_layers, dropout=(0 if n_layers == 1 else dropout)) self.concat = nn.Linear(hidden_size * 2, hidden_size) self.out = nn.Linear(hidden_size, output_size) self.attn = Attn(attn_model, hidden_size) def forward(self, input_step, last_hidden, encoder_outputs): # Note: we run this one step (word) at a time # Get embedding of current input word embedded = self.embedding(input_step) embedded = self.embedding_dropout(embedded) # Forward through unidirectional GRU rnn_output, hidden = self.gru(embedded, last_hidden) # Calculate attention weights from the current GRU output attn_weights = self.attn(rnn_output, encoder_outputs) # Multiply attention weights to encoder outputs to get new "weighted sum" context vector context = attn_weights.bmm(encoder_outputs.transpose(0, 1)) # Concatenate weighted context vector and GRU output using Luong eq. 5 rnn_output = rnn_output.squeeze(0) context = context.squeeze(1) concat_input = torch.cat((rnn_output, context), 1) concat_output = torch.tanh(self.concat(concat_input)) # Predict next word using Luong eq. 6 output = self.out(concat_output) output = F.softmax(output, dim=1) # Return output and final hidden state return output, hidden .. GENERATED FROM PYTHON SOURCE LINES 861-875 Define Training Procedure ------------------------- Masked loss ~~~~~~~~~~~ Since we are dealing with batches of padded sequences, we cannot simply consider all elements of the tensor when calculating loss. We define ``maskNLLLoss`` to calculate our loss based on our decoder’s output tensor, the target tensor, and a binary mask tensor describing the padding of the target tensor. This loss function calculates the average negative log likelihood of the elements that correspond to a *1* in the mask tensor. .. GENERATED FROM PYTHON SOURCE LINES 875-884 .. code-block:: default def maskNLLLoss(inp, target, mask): nTotal = mask.sum() crossEntropy = -torch.log(torch.gather(inp, 1, target.view(-1, 1)).squeeze(1)) loss = crossEntropy.masked_select(mask).mean() loss = loss.to(device) return loss, nTotal.item() .. GENERATED FROM PYTHON SOURCE LINES 885-944 Single training iteration ~~~~~~~~~~~~~~~~~~~~~~~~~ The ``train`` function contains the algorithm for a single training iteration (a single batch of inputs). We will use a couple of clever tricks to aid in convergence: - The first trick is using **teacher forcing**. This means that at some probability, set by ``teacher_forcing_ratio``, we use the current target word as the decoder’s next input rather than using the decoder’s current guess. This technique acts as training wheels for the decoder, aiding in more efficient training. However, teacher forcing can lead to model instability during inference, as the decoder may not have a sufficient chance to truly craft its own output sequences during training. Thus, we must be mindful of how we are setting the ``teacher_forcing_ratio``, and not be fooled by fast convergence. - The second trick that we implement is **gradient clipping**. This is a commonly used technique for countering the “exploding gradient” problem. 
In essence, by clipping or thresholding gradients to a maximum value, we prevent the gradients from growing exponentially and either overflow (NaN), or overshoot steep cliffs in the cost function. .. figure:: /_static/img/chatbot/grad_clip.png :align: center :width: 60% :alt: grad_clip Image source: Goodfellow et al. *Deep Learning*. 2016. https://www.deeplearningbook.org/ **Sequence of Operations:** 1) Forward pass entire input batch through encoder. 2) Initialize decoder inputs as SOS_token, and hidden state as the encoder's final hidden state. 3) Forward input batch sequence through decoder one time step at a time. 4) If teacher forcing: set next decoder input as the current target; else: set next decoder input as current decoder output. 5) Calculate and accumulate loss. 6) Perform backpropagation. 7) Clip gradients. 8) Update encoder and decoder model parameters. .. Note :: PyTorch’s RNN modules (``RNN``, ``LSTM``, ``GRU``) can be used like any other non-recurrent layers by simply passing them the entire input sequence (or batch of sequences). We use the ``GRU`` layer like this in the ``encoder``. The reality is that under the hood, there is an iterative process looping over each time step calculating hidden states. Alternatively, you can run these modules one time-step at a time. In this case, we manually loop over the sequences during the training process like we must do for the ``decoder`` model. As long as you maintain the correct conceptual model of these modules, implementing sequential models can be very straightforward. .. GENERATED FROM PYTHON SOURCE LINES 944-1020 .. code-block:: default def train(input_variable, lengths, target_variable, mask, max_target_len, encoder, decoder, embedding, encoder_optimizer, decoder_optimizer, batch_size, clip, max_length=MAX_LENGTH): # Zero gradients encoder_optimizer.zero_grad() decoder_optimizer.zero_grad() # Set device options input_variable = input_variable.to(device) target_variable = target_variable.to(device) mask = mask.to(device) # Lengths for RNN packing should always be on the CPU lengths = lengths.to("cpu") # Initialize variables loss = 0 print_losses = [] n_totals = 0 # Forward pass through encoder encoder_outputs, encoder_hidden = encoder(input_variable, lengths) # Create initial decoder input (start with SOS tokens for each sentence) decoder_input = torch.LongTensor([[SOS_token for _ in range(batch_size)]]) decoder_input = decoder_input.to(device) # Set initial decoder hidden state to the encoder's final hidden state decoder_hidden = encoder_hidden[:decoder.n_layers] # Determine if we are using teacher forcing this iteration use_teacher_forcing = True if random.random() < teacher_forcing_ratio else False # Forward batch of sequences through decoder one time step at a time if use_teacher_forcing: for t in range(max_target_len): decoder_output, decoder_hidden = decoder( decoder_input, decoder_hidden, encoder_outputs ) # Teacher forcing: next input is current target decoder_input = target_variable[t].view(1, -1) # Calculate and accumulate loss mask_loss, nTotal = maskNLLLoss(decoder_output, target_variable[t], mask[t]) loss += mask_loss print_losses.append(mask_loss.item() * nTotal) n_totals += nTotal else: for t in range(max_target_len): decoder_output, decoder_hidden = decoder( decoder_input, decoder_hidden, encoder_outputs ) # No teacher forcing: next input is decoder's own current output _, topi = decoder_output.topk(1) decoder_input = torch.LongTensor([[topi[i][0] for i in range(batch_size)]]) decoder_input = 
decoder_input.to(device) # Calculate and accumulate loss mask_loss, nTotal = maskNLLLoss(decoder_output, target_variable[t], mask[t]) loss += mask_loss print_losses.append(mask_loss.item() * nTotal) n_totals += nTotal # Perform backpropagation loss.backward() # Clip gradients: gradients are modified in place _ = nn.utils.clip_grad_norm_(encoder.parameters(), clip) _ = nn.utils.clip_grad_norm_(decoder.parameters(), clip) # Adjust model weights encoder_optimizer.step() decoder_optimizer.step() return sum(print_losses) / n_totals .. GENERATED FROM PYTHON SOURCE LINES 1021-1037 Training iterations ~~~~~~~~~~~~~~~~~~~ It is finally time to tie the full training procedure together with the data. The ``trainIters`` function is responsible for running ``n_iterations`` of training given the passed models, optimizers, data, etc. This function is quite self explanatory, as we have done the heavy lifting with the ``train`` function. One thing to note is that when we save our model, we save a tarball containing the encoder and decoder ``state_dicts`` (parameters), the optimizers’ ``state_dicts``, the loss, the iteration, etc. Saving the model in this way will give us the ultimate flexibility with the checkpoint. After loading a checkpoint, we will be able to use the model parameters to run inference, or we can continue training right where we left off. .. GENERATED FROM PYTHON SOURCE LINES 1037-1086 .. code-block:: default def trainIters(model_name, voc, pairs, encoder, decoder, encoder_optimizer, decoder_optimizer, embedding, encoder_n_layers, decoder_n_layers, save_dir, n_iteration, batch_size, print_every, save_every, clip, corpus_name, loadFilename): # Load batches for each iteration training_batches = [batch2TrainData(voc, [random.choice(pairs) for _ in range(batch_size)]) for _ in range(n_iteration)] # Initializations print('Initializing ...') start_iteration = 1 print_loss = 0 if loadFilename: start_iteration = checkpoint['iteration'] + 1 # Training loop print("Training...") for iteration in range(start_iteration, n_iteration + 1): training_batch = training_batches[iteration - 1] # Extract fields from batch input_variable, lengths, target_variable, mask, max_target_len = training_batch # Run a training iteration with batch loss = train(input_variable, lengths, target_variable, mask, max_target_len, encoder, decoder, embedding, encoder_optimizer, decoder_optimizer, batch_size, clip) print_loss += loss # Print progress if iteration % print_every == 0: print_loss_avg = print_loss / print_every print("Iteration: {}; Percent complete: {:.1f}%; Average loss: {:.4f}".format(iteration, iteration / n_iteration * 100, print_loss_avg)) print_loss = 0 # Save checkpoint if (iteration % save_every == 0): directory = os.path.join(save_dir, model_name, corpus_name, '{}-{}_{}'.format(encoder_n_layers, decoder_n_layers, hidden_size)) if not os.path.exists(directory): os.makedirs(directory) torch.save({ 'iteration': iteration, 'en': encoder.state_dict(), 'de': decoder.state_dict(), 'en_opt': encoder_optimizer.state_dict(), 'de_opt': decoder_optimizer.state_dict(), 'loss': loss, 'voc_dict': voc.__dict__, 'embedding': embedding.state_dict() }, os.path.join(directory, '{}_{}.tar'.format(iteration, 'checkpoint'))) .. GENERATED FROM PYTHON SOURCE LINES 1087-1122 Define Evaluation ----------------- After training a model, we want to be able to talk to the bot ourselves. First, we must define how we want the model to decode the encoded input. 
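One practical point before we define the decoding procedure: both models contain dropout layers, which behave differently at training and inference time. A minimal sketch of the switch to evaluation mode, assuming ``encoder`` and ``decoder`` are the trained model instances built in the Run Model section below:

.. code-block:: python

   # Dropout layers are only active during training, so put both models
   # into evaluation mode before chatting with the bot
   encoder.eval()
   decoder.eval()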
Greedy decoding ~~~~~~~~~~~~~~~ Greedy decoding is the decoding method that we use during training when we are **NOT** using teacher forcing. In other words, for each time step, we simply choose the word from ``decoder_output`` with the highest softmax value. This decoding method is optimal on a single time-step level. To facilitate the greedy decoding operation, we define a ``GreedySearchDecoder`` class. When run, an object of this class takes an input sequence (``input_seq``) of shape *(input_seq length, 1)*, a scalar input length (``input_length``) tensor, and a ``max_length`` to bound the response sentence length. The input sentence is evaluated using the following computational graph: **Computation Graph:** 1) Forward input through encoder model. 2) Prepare encoder's final hidden layer to be first hidden input to the decoder. 3) Initialize decoder's first input as SOS_token. 4) Initialize tensors to append decoded words to. 5) Iteratively decode one word token at a time: a) Forward pass through decoder. b) Obtain most likely word token and its softmax score. c) Record token and score. d) Prepare current token to be next decoder input. 6) Return collections of word tokens and scores. .. GENERATED FROM PYTHON SOURCE LINES 1122-1154 .. code-block:: default class GreedySearchDecoder(nn.Module): def __init__(self, encoder, decoder): super(GreedySearchDecoder, self).__init__() self.encoder = encoder self.decoder = decoder def forward(self, input_seq, input_length, max_length): # Forward input through encoder model encoder_outputs, encoder_hidden = self.encoder(input_seq, input_length) # Prepare encoder's final hidden layer to be first hidden input to the decoder decoder_hidden = encoder_hidden[:self.decoder.n_layers] # Initialize decoder input with SOS_token decoder_input = torch.ones(1, 1, device=device, dtype=torch.long) * SOS_token # Initialize tensors to append decoded words to all_tokens = torch.zeros([0], device=device, dtype=torch.long) all_scores = torch.zeros([0], device=device) # Iteratively decode one word token at a time for _ in range(max_length): # Forward pass through decoder decoder_output, decoder_hidden = self.decoder(decoder_input, decoder_hidden, encoder_outputs) # Obtain most likely word token and its softmax score decoder_scores, decoder_input = torch.max(decoder_output, dim=1) # Record token and score all_tokens = torch.cat((all_tokens, decoder_input), dim=0) all_scores = torch.cat((all_scores, decoder_scores), dim=0) # Prepare current token to be next decoder input (add a dimension) decoder_input = torch.unsqueeze(decoder_input, 0) # Return collections of word tokens and scores return all_tokens, all_scores .. GENERATED FROM PYTHON SOURCE LINES 1155-1183 Evaluate my text ~~~~~~~~~~~~~~~~ Now that we have our decoding method defined, we can write functions for evaluating a string input sentence. The ``evaluate`` function manages the low-level process of handling the input sentence. We first format the sentence as an input batch of word indexes with *batch_size==1*. We do this by converting the words of the sentence to their corresponding indexes, and transposing the dimensions to prepare the tensor for our models. We also create a ``lengths`` tensor which contains the length of our input sentence. In this case, ``lengths`` is scalar because we are only evaluating one sentence at a time (batch_size==1). Next, we obtain the decoded response sentence tensor using our ``GreedySearchDecoder`` object (``searcher``). 
Finally, we convert the response’s indexes to words and return the list of decoded words. ``evaluateInput`` acts as the user interface for our chatbot. When called, an input text field will spawn in which we can enter our query sentence. After typing our input sentence and pressing *Enter*, our text is normalized in the same way as our training data, and is ultimately fed to the ``evaluate`` function to obtain a decoded output sentence. We loop this process, so we can keep chatting with our bot until we enter either “q” or “quit”. Finally, if a sentence is entered that contains a word that is not in the vocabulary, we handle this gracefully by printing an error message and prompting the user to enter another sentence. .. GENERATED FROM PYTHON SOURCE LINES 1183-1222 .. code-block:: default def evaluate(encoder, decoder, searcher, voc, sentence, max_length=MAX_LENGTH): ### Format input sentence as a batch # words -> indexes indexes_batch = [indexesFromSentence(voc, sentence)] # Create lengths tensor lengths = torch.tensor([len(indexes) for indexes in indexes_batch]) # Transpose dimensions of batch to match models' expectations input_batch = torch.LongTensor(indexes_batch).transpose(0, 1) # Use appropriate device input_batch = input_batch.to(device) lengths = lengths.to("cpu") # Decode sentence with searcher tokens, scores = searcher(input_batch, lengths, max_length) # indexes -> words decoded_words = [voc.index2word[token.item()] for token in tokens] return decoded_words def evaluateInput(encoder, decoder, searcher, voc): input_sentence = '' while(1): try: # Get input sentence input_sentence = input('> ') # Check if it is quit case if input_sentence == 'q' or input_sentence == 'quit': break # Normalize sentence input_sentence = normalizeString(input_sentence) # Evaluate sentence output_words = evaluate(encoder, decoder, searcher, voc, input_sentence) # Format and print response sentence output_words[:] = [x for x in output_words if not (x == 'EOS' or x == 'PAD')] print('Bot:', ' '.join(output_words)) except KeyError: print("Error: Encountered unknown word.") .. GENERATED FROM PYTHON SOURCE LINES 1223-1235 Run Model --------- Finally, it is time to run our model! Regardless of whether we want to train or test the chatbot model, we must initialize the individual encoder and decoder models. In the following block, we set our desired configurations, choose to start from scratch or set a checkpoint to load from, and build and initialize the models. Feel free to play with different model configurations to optimize performance. .. GENERATED FROM PYTHON SOURCE LINES 1235-1251 .. code-block:: default # Configure models model_name = 'cb_model' attn_model = 'dot' #``attn_model = 'general'`` #``attn_model = 'concat'`` hidden_size = 500 encoder_n_layers = 2 decoder_n_layers = 2 dropout = 0.1 batch_size = 64 # Set checkpoint to load from; set to None if starting from scratch loadFilename = None checkpoint_iter = 4000 .. GENERATED FROM PYTHON SOURCE LINES 1252-1259 Sample code to load from a checkpoint: .. code-block:: python loadFilename = os.path.join(save_dir, model_name, corpus_name, '{}-{}_{}'.format(encoder_n_layers, decoder_n_layers, hidden_size), '{}_checkpoint.tar'.format(checkpoint_iter)) .. GENERATED FROM PYTHON SOURCE LINES 1259-1291 .. 
code-block:: default

   # Load model if a ``loadFilename`` is provided
   if loadFilename:
       # If loading on same machine the model was trained on
       checkpoint = torch.load(loadFilename)
       # If loading a model trained on GPU to CPU
       #checkpoint = torch.load(loadFilename, map_location=torch.device('cpu'))
       encoder_sd = checkpoint['en']
       decoder_sd = checkpoint['de']
       encoder_optimizer_sd = checkpoint['en_opt']
       decoder_optimizer_sd = checkpoint['de_opt']
       embedding_sd = checkpoint['embedding']
       voc.__dict__ = checkpoint['voc_dict']

   print('Building encoder and decoder ...')
   # Initialize word embeddings
   embedding = nn.Embedding(voc.num_words, hidden_size)
   if loadFilename:
       embedding.load_state_dict(embedding_sd)
   # Initialize encoder & decoder models
   encoder = EncoderRNN(hidden_size, embedding, encoder_n_layers, dropout)
   decoder = LuongAttnDecoderRNN(attn_model, embedding, hidden_size, voc.num_words, decoder_n_layers, dropout)
   if loadFilename:
       encoder.load_state_dict(encoder_sd)
       decoder.load_state_dict(decoder_sd)
   # Use appropriate device
   encoder = encoder.to(device)
   decoder = decoder.to(device)
   print('Models built and ready to go!')

.. rst-class:: sphx-glr-script-out

.. code-block:: none

   Building encoder and decoder ...
   Models built and ready to go!

.. GENERATED FROM PYTHON SOURCE LINES 1292-1301

Run Training
~~~~~~~~~~~~

Run the following block if you want to train the model.

First we set training parameters, then we initialize our optimizers, and finally we call the ``trainIters`` function to run our training iterations.

.. GENERATED FROM PYTHON SOURCE LINES 1301-1341

.. code-block:: default

   # Configure training/optimization
   clip = 50.0
   teacher_forcing_ratio = 1.0
   learning_rate = 0.0001
   decoder_learning_ratio = 5.0
   n_iteration = 4000
   print_every = 1
   save_every = 500

   # Ensure dropout layers are in train mode
   encoder.train()
   decoder.train()

   # Initialize optimizers
   print('Building optimizers ...')
   encoder_optimizer = optim.Adam(encoder.parameters(), lr=learning_rate)
   decoder_optimizer = optim.Adam(decoder.parameters(), lr=learning_rate * decoder_learning_ratio)
   if loadFilename:
       encoder_optimizer.load_state_dict(encoder_optimizer_sd)
       decoder_optimizer.load_state_dict(decoder_optimizer_sd)

   # If you have an accelerator, move the loaded optimizer state tensors to it
   for state in encoder_optimizer.state.values():
       for k, v in state.items():
           if isinstance(v, torch.Tensor):
               state[k] = v.to(device)

   for state in decoder_optimizer.state.values():
       for k, v in state.items():
           if isinstance(v, torch.Tensor):
               state[k] = v.to(device)

   # Run training iterations
   print("Starting Training!")
   trainIters(model_name, voc, pairs, encoder, decoder, encoder_optimizer, decoder_optimizer,
              embedding, encoder_n_layers, decoder_n_layers, save_dir, n_iteration, batch_size,
              print_every, save_every, clip, corpus_name, loadFilename)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

   Building optimizers ...
   Starting Training!
   Initializing ...
   Training...
.. GENERATED FROM PYTHON SOURCE LINES 1301-1341

.. code-block:: default


    # Configure training/optimization
    clip = 50.0
    teacher_forcing_ratio = 1.0
    learning_rate = 0.0001
    decoder_learning_ratio = 5.0
    n_iteration = 4000
    print_every = 1
    save_every = 500

    # Ensure dropout layers are in train mode
    encoder.train()
    decoder.train()

    # Initialize optimizers
    print('Building optimizers ...')
    encoder_optimizer = optim.Adam(encoder.parameters(), lr=learning_rate)
    decoder_optimizer = optim.Adam(decoder.parameters(), lr=learning_rate * decoder_learning_ratio)
    if loadFilename:
        encoder_optimizer.load_state_dict(encoder_optimizer_sd)
        decoder_optimizer.load_state_dict(decoder_optimizer_sd)

    # If resuming from a checkpoint, move the optimizer state tensors to the
    # configured device
    for state in encoder_optimizer.state.values():
        for k, v in state.items():
            if isinstance(v, torch.Tensor):
                state[k] = v.to(device)

    for state in decoder_optimizer.state.values():
        for k, v in state.items():
            if isinstance(v, torch.Tensor):
                state[k] = v.to(device)

    # Run training iterations
    print("Starting Training!")
    trainIters(model_name, voc, pairs, encoder, decoder, encoder_optimizer, decoder_optimizer,
               embedding, encoder_n_layers, decoder_n_layers, save_dir, n_iteration, batch_size,
               print_every, save_every, clip, corpus_name, loadFilename)

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Building optimizers ...
    Starting Training!
    Initializing ...
    Training...
    Iteration: 1; Percent complete: 0.0%; Average loss: 8.9763
    Iteration: 2; Percent complete: 0.1%; Average loss: 8.8524
    Iteration: 3; Percent complete: 0.1%; Average loss: 8.6602
    Iteration: 4; Percent complete: 0.1%; Average loss: 8.3726
    Iteration: 5; Percent complete: 0.1%; Average loss: 8.0231
    Iteration: 6; Percent complete: 0.1%; Average loss: 7.3824
    Iteration: 7; Percent complete: 0.2%; Average loss: 7.1809
    Iteration: 8; Percent complete: 0.2%; Average loss: 7.1986
    Iteration: 9; Percent complete: 0.2%; Average loss: 7.1499
    Iteration: 10; Percent complete: 0.2%; Average loss: 6.8187
    Iteration: 11; Percent complete: 0.3%; Average loss: 6.2910
    Iteration: 12; Percent complete: 0.3%; Average loss: 6.1314
    Iteration: 13; Percent complete: 0.3%; Average loss: 5.6805
    Iteration: 14; Percent complete: 0.4%; Average loss: 5.5938
    Iteration: 15; Percent complete: 0.4%; Average loss: 5.5549
    Iteration: 16; Percent complete: 0.4%; Average loss: 5.7334
    Iteration: 17; Percent complete: 0.4%; Average loss: 5.3979
    Iteration: 18; Percent complete: 0.4%; Average loss: 5.0782
    Iteration: 19; Percent complete: 0.5%; Average loss: 5.1751
    Iteration: 20; Percent complete: 0.5%; Average loss: 4.8445
    ...
    Iteration: 1090; Percent complete: 27.3%; Average loss: 3.3419
    Iteration: 1091; Percent complete: 27.3%; Average loss: 3.5830
    Iteration: 1092; Percent complete: 27.3%; Average loss: 3.5201
    Iteration: 1093; Percent complete: 27.3%; Average loss: 3.4047
    Iteration: 1094; Percent complete: 27.4%; Average loss: 3.5882
    ...
3.4484 Iteration: 1096; Percent complete: 27.4%; Average loss: 3.5041 Iteration: 1097; Percent complete: 27.4%; Average loss: 3.6441 Iteration: 1098; Percent complete: 27.5%; Average loss: 3.2409 Iteration: 1099; Percent complete: 27.5%; Average loss: 3.1179 Iteration: 1100; Percent complete: 27.5%; Average loss: 3.2562 Iteration: 1101; Percent complete: 27.5%; Average loss: 3.2162 Iteration: 1102; Percent complete: 27.6%; Average loss: 3.4767 Iteration: 1103; Percent complete: 27.6%; Average loss: 3.4880 Iteration: 1104; Percent complete: 27.6%; Average loss: 3.4587 Iteration: 1105; Percent complete: 27.6%; Average loss: 3.0391 Iteration: 1106; Percent complete: 27.7%; Average loss: 3.3809 Iteration: 1107; Percent complete: 27.7%; Average loss: 3.4488 Iteration: 1108; Percent complete: 27.7%; Average loss: 3.4618 Iteration: 1109; Percent complete: 27.7%; Average loss: 3.5626 Iteration: 1110; Percent complete: 27.8%; Average loss: 3.5860 Iteration: 1111; Percent complete: 27.8%; Average loss: 3.3613 Iteration: 1112; Percent complete: 27.8%; Average loss: 3.4263 Iteration: 1113; Percent complete: 27.8%; Average loss: 3.4340 Iteration: 1114; Percent complete: 27.9%; Average loss: 3.5060 Iteration: 1115; Percent complete: 27.9%; Average loss: 3.3153 Iteration: 1116; Percent complete: 27.9%; Average loss: 3.5116 Iteration: 1117; Percent complete: 27.9%; Average loss: 3.5406 Iteration: 1118; Percent complete: 28.0%; Average loss: 3.3814 Iteration: 1119; Percent complete: 28.0%; Average loss: 3.3884 Iteration: 1120; Percent complete: 28.0%; Average loss: 3.3617 Iteration: 1121; Percent complete: 28.0%; Average loss: 3.3726 Iteration: 1122; Percent complete: 28.1%; Average loss: 3.3063 Iteration: 1123; Percent complete: 28.1%; Average loss: 3.3263 Iteration: 1124; Percent complete: 28.1%; Average loss: 3.4200 Iteration: 1125; Percent complete: 28.1%; Average loss: 3.4216 Iteration: 1126; Percent complete: 28.1%; Average loss: 3.3722 Iteration: 1127; Percent complete: 28.2%; Average loss: 3.5266 Iteration: 1128; Percent complete: 28.2%; Average loss: 3.1939 Iteration: 1129; Percent complete: 28.2%; Average loss: 3.3266 Iteration: 1130; Percent complete: 28.2%; Average loss: 3.5089 Iteration: 1131; Percent complete: 28.3%; Average loss: 3.4705 Iteration: 1132; Percent complete: 28.3%; Average loss: 3.6773 Iteration: 1133; Percent complete: 28.3%; Average loss: 3.3624 Iteration: 1134; Percent complete: 28.3%; Average loss: 3.6978 Iteration: 1135; Percent complete: 28.4%; Average loss: 3.3904 Iteration: 1136; Percent complete: 28.4%; Average loss: 3.4204 Iteration: 1137; Percent complete: 28.4%; Average loss: 3.4899 Iteration: 1138; Percent complete: 28.4%; Average loss: 3.5632 Iteration: 1139; Percent complete: 28.5%; Average loss: 3.2679 Iteration: 1140; Percent complete: 28.5%; Average loss: 3.4248 Iteration: 1141; Percent complete: 28.5%; Average loss: 3.4497 Iteration: 1142; Percent complete: 28.5%; Average loss: 3.2866 Iteration: 1143; Percent complete: 28.6%; Average loss: 3.6605 Iteration: 1144; Percent complete: 28.6%; Average loss: 3.2478 Iteration: 1145; Percent complete: 28.6%; Average loss: 3.6495 Iteration: 1146; Percent complete: 28.6%; Average loss: 3.3444 Iteration: 1147; Percent complete: 28.7%; Average loss: 3.4633 Iteration: 1148; Percent complete: 28.7%; Average loss: 3.3532 Iteration: 1149; Percent complete: 28.7%; Average loss: 3.2804 Iteration: 1150; Percent complete: 28.7%; Average loss: 3.2153 Iteration: 1151; Percent complete: 28.8%; Average loss: 3.4235 Iteration: 1152; 
Percent complete: 28.8%; Average loss: 3.3207 Iteration: 1153; Percent complete: 28.8%; Average loss: 3.3694 Iteration: 1154; Percent complete: 28.8%; Average loss: 3.3925 Iteration: 1155; Percent complete: 28.9%; Average loss: 3.6858 Iteration: 1156; Percent complete: 28.9%; Average loss: 3.3758 Iteration: 1157; Percent complete: 28.9%; Average loss: 3.5472 Iteration: 1158; Percent complete: 28.9%; Average loss: 3.4650 Iteration: 1159; Percent complete: 29.0%; Average loss: 3.3540 Iteration: 1160; Percent complete: 29.0%; Average loss: 3.4304 Iteration: 1161; Percent complete: 29.0%; Average loss: 3.6803 Iteration: 1162; Percent complete: 29.0%; Average loss: 3.4414 Iteration: 1163; Percent complete: 29.1%; Average loss: 3.3418 Iteration: 1164; Percent complete: 29.1%; Average loss: 3.3577 Iteration: 1165; Percent complete: 29.1%; Average loss: 3.2575 Iteration: 1166; Percent complete: 29.1%; Average loss: 3.4152 Iteration: 1167; Percent complete: 29.2%; Average loss: 3.2000 Iteration: 1168; Percent complete: 29.2%; Average loss: 3.1974 Iteration: 1169; Percent complete: 29.2%; Average loss: 3.3756 Iteration: 1170; Percent complete: 29.2%; Average loss: 3.3544 Iteration: 1171; Percent complete: 29.3%; Average loss: 3.5103 Iteration: 1172; Percent complete: 29.3%; Average loss: 3.1884 Iteration: 1173; Percent complete: 29.3%; Average loss: 3.3170 Iteration: 1174; Percent complete: 29.3%; Average loss: 3.1747 Iteration: 1175; Percent complete: 29.4%; Average loss: 3.6631 Iteration: 1176; Percent complete: 29.4%; Average loss: 3.4812 Iteration: 1177; Percent complete: 29.4%; Average loss: 3.5834 Iteration: 1178; Percent complete: 29.4%; Average loss: 3.4349 Iteration: 1179; Percent complete: 29.5%; Average loss: 3.3313 Iteration: 1180; Percent complete: 29.5%; Average loss: 3.5612 Iteration: 1181; Percent complete: 29.5%; Average loss: 3.3368 Iteration: 1182; Percent complete: 29.5%; Average loss: 3.4015 Iteration: 1183; Percent complete: 29.6%; Average loss: 3.3376 Iteration: 1184; Percent complete: 29.6%; Average loss: 3.3752 Iteration: 1185; Percent complete: 29.6%; Average loss: 3.2451 Iteration: 1186; Percent complete: 29.6%; Average loss: 3.5754 Iteration: 1187; Percent complete: 29.7%; Average loss: 3.2641 Iteration: 1188; Percent complete: 29.7%; Average loss: 3.6270 Iteration: 1189; Percent complete: 29.7%; Average loss: 3.4868 Iteration: 1190; Percent complete: 29.8%; Average loss: 3.3552 Iteration: 1191; Percent complete: 29.8%; Average loss: 3.6588 Iteration: 1192; Percent complete: 29.8%; Average loss: 3.5883 Iteration: 1193; Percent complete: 29.8%; Average loss: 3.4549 Iteration: 1194; Percent complete: 29.8%; Average loss: 3.3648 Iteration: 1195; Percent complete: 29.9%; Average loss: 3.2347 Iteration: 1196; Percent complete: 29.9%; Average loss: 3.4347 Iteration: 1197; Percent complete: 29.9%; Average loss: 3.3581 Iteration: 1198; Percent complete: 29.9%; Average loss: 3.3388 Iteration: 1199; Percent complete: 30.0%; Average loss: 3.3669 Iteration: 1200; Percent complete: 30.0%; Average loss: 3.4407 Iteration: 1201; Percent complete: 30.0%; Average loss: 3.3671 Iteration: 1202; Percent complete: 30.0%; Average loss: 3.1826 Iteration: 1203; Percent complete: 30.1%; Average loss: 3.2893 Iteration: 1204; Percent complete: 30.1%; Average loss: 3.4997 Iteration: 1205; Percent complete: 30.1%; Average loss: 3.3388 Iteration: 1206; Percent complete: 30.1%; Average loss: 3.4029 Iteration: 1207; Percent complete: 30.2%; Average loss: 3.3448 Iteration: 1208; Percent complete: 30.2%; 
Average loss: 3.4881 Iteration: 1209; Percent complete: 30.2%; Average loss: 3.3277 Iteration: 1210; Percent complete: 30.2%; Average loss: 3.2183 Iteration: 1211; Percent complete: 30.3%; Average loss: 3.3119 Iteration: 1212; Percent complete: 30.3%; Average loss: 3.3263 Iteration: 1213; Percent complete: 30.3%; Average loss: 3.5276 Iteration: 1214; Percent complete: 30.3%; Average loss: 3.1515 Iteration: 1215; Percent complete: 30.4%; Average loss: 3.0550 Iteration: 1216; Percent complete: 30.4%; Average loss: 3.4661 Iteration: 1217; Percent complete: 30.4%; Average loss: 3.4339 Iteration: 1218; Percent complete: 30.4%; Average loss: 3.3765 Iteration: 1219; Percent complete: 30.5%; Average loss: 3.4252 Iteration: 1220; Percent complete: 30.5%; Average loss: 3.0560 Iteration: 1221; Percent complete: 30.5%; Average loss: 3.2484 Iteration: 1222; Percent complete: 30.6%; Average loss: 3.3874 Iteration: 1223; Percent complete: 30.6%; Average loss: 3.4277 Iteration: 1224; Percent complete: 30.6%; Average loss: 3.4135 Iteration: 1225; Percent complete: 30.6%; Average loss: 3.2142 Iteration: 1226; Percent complete: 30.6%; Average loss: 3.5871 Iteration: 1227; Percent complete: 30.7%; Average loss: 3.3130 Iteration: 1228; Percent complete: 30.7%; Average loss: 3.6487 Iteration: 1229; Percent complete: 30.7%; Average loss: 3.4318 Iteration: 1230; Percent complete: 30.8%; Average loss: 3.3077 Iteration: 1231; Percent complete: 30.8%; Average loss: 3.3341 Iteration: 1232; Percent complete: 30.8%; Average loss: 3.3609 Iteration: 1233; Percent complete: 30.8%; Average loss: 3.5503 Iteration: 1234; Percent complete: 30.9%; Average loss: 3.3561 Iteration: 1235; Percent complete: 30.9%; Average loss: 3.3971 Iteration: 1236; Percent complete: 30.9%; Average loss: 3.5286 Iteration: 1237; Percent complete: 30.9%; Average loss: 3.3944 Iteration: 1238; Percent complete: 30.9%; Average loss: 2.9974 Iteration: 1239; Percent complete: 31.0%; Average loss: 3.4920 Iteration: 1240; Percent complete: 31.0%; Average loss: 3.5474 Iteration: 1241; Percent complete: 31.0%; Average loss: 3.1395 Iteration: 1242; Percent complete: 31.1%; Average loss: 3.4750 Iteration: 1243; Percent complete: 31.1%; Average loss: 3.3183 Iteration: 1244; Percent complete: 31.1%; Average loss: 3.2689 Iteration: 1245; Percent complete: 31.1%; Average loss: 3.4059 Iteration: 1246; Percent complete: 31.1%; Average loss: 3.5174 Iteration: 1247; Percent complete: 31.2%; Average loss: 3.2573 Iteration: 1248; Percent complete: 31.2%; Average loss: 3.3343 Iteration: 1249; Percent complete: 31.2%; Average loss: 3.5340 Iteration: 1250; Percent complete: 31.2%; Average loss: 3.2655 Iteration: 1251; Percent complete: 31.3%; Average loss: 3.5286 Iteration: 1252; Percent complete: 31.3%; Average loss: 3.2812 Iteration: 1253; Percent complete: 31.3%; Average loss: 3.2007 Iteration: 1254; Percent complete: 31.4%; Average loss: 3.4681 Iteration: 1255; Percent complete: 31.4%; Average loss: 3.4157 Iteration: 1256; Percent complete: 31.4%; Average loss: 3.4223 Iteration: 1257; Percent complete: 31.4%; Average loss: 3.6472 Iteration: 1258; Percent complete: 31.4%; Average loss: 3.1425 Iteration: 1259; Percent complete: 31.5%; Average loss: 3.3842 Iteration: 1260; Percent complete: 31.5%; Average loss: 3.4845 Iteration: 1261; Percent complete: 31.5%; Average loss: 3.2679 Iteration: 1262; Percent complete: 31.6%; Average loss: 3.6206 Iteration: 1263; Percent complete: 31.6%; Average loss: 3.5342 Iteration: 1264; Percent complete: 31.6%; Average loss: 3.4540 
Iteration: 1265; Percent complete: 31.6%; Average loss: 3.5560 Iteration: 1266; Percent complete: 31.6%; Average loss: 3.4846 Iteration: 1267; Percent complete: 31.7%; Average loss: 3.3796 Iteration: 1268; Percent complete: 31.7%; Average loss: 3.4585 Iteration: 1269; Percent complete: 31.7%; Average loss: 3.3672 Iteration: 1270; Percent complete: 31.8%; Average loss: 3.3975 Iteration: 1271; Percent complete: 31.8%; Average loss: 3.2523 Iteration: 1272; Percent complete: 31.8%; Average loss: 2.9734 Iteration: 1273; Percent complete: 31.8%; Average loss: 3.6262 Iteration: 1274; Percent complete: 31.9%; Average loss: 3.5329 Iteration: 1275; Percent complete: 31.9%; Average loss: 3.3479 Iteration: 1276; Percent complete: 31.9%; Average loss: 3.4467 Iteration: 1277; Percent complete: 31.9%; Average loss: 3.4138 Iteration: 1278; Percent complete: 31.9%; Average loss: 3.0556 Iteration: 1279; Percent complete: 32.0%; Average loss: 3.4014 Iteration: 1280; Percent complete: 32.0%; Average loss: 3.2318 Iteration: 1281; Percent complete: 32.0%; Average loss: 3.4028 Iteration: 1282; Percent complete: 32.0%; Average loss: 3.2517 Iteration: 1283; Percent complete: 32.1%; Average loss: 3.5744 Iteration: 1284; Percent complete: 32.1%; Average loss: 3.4137 Iteration: 1285; Percent complete: 32.1%; Average loss: 3.1378 Iteration: 1286; Percent complete: 32.1%; Average loss: 3.3295 Iteration: 1287; Percent complete: 32.2%; Average loss: 3.3847 Iteration: 1288; Percent complete: 32.2%; Average loss: 3.2983 Iteration: 1289; Percent complete: 32.2%; Average loss: 3.3535 Iteration: 1290; Percent complete: 32.2%; Average loss: 3.6023 Iteration: 1291; Percent complete: 32.3%; Average loss: 3.2390 Iteration: 1292; Percent complete: 32.3%; Average loss: 3.3975 Iteration: 1293; Percent complete: 32.3%; Average loss: 3.4313 Iteration: 1294; Percent complete: 32.4%; Average loss: 3.5401 Iteration: 1295; Percent complete: 32.4%; Average loss: 3.3055 Iteration: 1296; Percent complete: 32.4%; Average loss: 3.1722 Iteration: 1297; Percent complete: 32.4%; Average loss: 3.3443 Iteration: 1298; Percent complete: 32.5%; Average loss: 3.2980 Iteration: 1299; Percent complete: 32.5%; Average loss: 3.1689 Iteration: 1300; Percent complete: 32.5%; Average loss: 3.2749 Iteration: 1301; Percent complete: 32.5%; Average loss: 3.3225 Iteration: 1302; Percent complete: 32.6%; Average loss: 3.1975 Iteration: 1303; Percent complete: 32.6%; Average loss: 3.1213 Iteration: 1304; Percent complete: 32.6%; Average loss: 3.2309 Iteration: 1305; Percent complete: 32.6%; Average loss: 3.3634 Iteration: 1306; Percent complete: 32.6%; Average loss: 3.3889 Iteration: 1307; Percent complete: 32.7%; Average loss: 3.4591 Iteration: 1308; Percent complete: 32.7%; Average loss: 3.4445 Iteration: 1309; Percent complete: 32.7%; Average loss: 3.0677 Iteration: 1310; Percent complete: 32.8%; Average loss: 3.2511 Iteration: 1311; Percent complete: 32.8%; Average loss: 3.2737 Iteration: 1312; Percent complete: 32.8%; Average loss: 3.0039 Iteration: 1313; Percent complete: 32.8%; Average loss: 3.4812 Iteration: 1314; Percent complete: 32.9%; Average loss: 3.3061 Iteration: 1315; Percent complete: 32.9%; Average loss: 3.4252 Iteration: 1316; Percent complete: 32.9%; Average loss: 3.6758 Iteration: 1317; Percent complete: 32.9%; Average loss: 3.2754 Iteration: 1318; Percent complete: 33.0%; Average loss: 3.3089 Iteration: 1319; Percent complete: 33.0%; Average loss: 3.4613 Iteration: 1320; Percent complete: 33.0%; Average loss: 3.4771 Iteration: 1321; Percent 
complete: 33.0%; Average loss: 3.2840 Iteration: 1322; Percent complete: 33.1%; Average loss: 3.4861 Iteration: 1323; Percent complete: 33.1%; Average loss: 3.2174 Iteration: 1324; Percent complete: 33.1%; Average loss: 3.5223 Iteration: 1325; Percent complete: 33.1%; Average loss: 3.3416 Iteration: 1326; Percent complete: 33.1%; Average loss: 3.4035 Iteration: 1327; Percent complete: 33.2%; Average loss: 3.2513 Iteration: 1328; Percent complete: 33.2%; Average loss: 3.3275 Iteration: 1329; Percent complete: 33.2%; Average loss: 3.4093 Iteration: 1330; Percent complete: 33.2%; Average loss: 3.2670 Iteration: 1331; Percent complete: 33.3%; Average loss: 3.1213 Iteration: 1332; Percent complete: 33.3%; Average loss: 3.4486 Iteration: 1333; Percent complete: 33.3%; Average loss: 3.3019 Iteration: 1334; Percent complete: 33.4%; Average loss: 3.5809 Iteration: 1335; Percent complete: 33.4%; Average loss: 3.4798 Iteration: 1336; Percent complete: 33.4%; Average loss: 3.3987 Iteration: 1337; Percent complete: 33.4%; Average loss: 3.1237 Iteration: 1338; Percent complete: 33.5%; Average loss: 3.4886 Iteration: 1339; Percent complete: 33.5%; Average loss: 3.3365 Iteration: 1340; Percent complete: 33.5%; Average loss: 3.5348 Iteration: 1341; Percent complete: 33.5%; Average loss: 3.2004 Iteration: 1342; Percent complete: 33.6%; Average loss: 3.4376 Iteration: 1343; Percent complete: 33.6%; Average loss: 3.0981 Iteration: 1344; Percent complete: 33.6%; Average loss: 3.2550 Iteration: 1345; Percent complete: 33.6%; Average loss: 3.5144 Iteration: 1346; Percent complete: 33.7%; Average loss: 3.0407 Iteration: 1347; Percent complete: 33.7%; Average loss: 3.2784 Iteration: 1348; Percent complete: 33.7%; Average loss: 3.3925 Iteration: 1349; Percent complete: 33.7%; Average loss: 3.3973 Iteration: 1350; Percent complete: 33.8%; Average loss: 3.1225 Iteration: 1351; Percent complete: 33.8%; Average loss: 3.1845 Iteration: 1352; Percent complete: 33.8%; Average loss: 3.4488 Iteration: 1353; Percent complete: 33.8%; Average loss: 3.0196 Iteration: 1354; Percent complete: 33.9%; Average loss: 3.2543 Iteration: 1355; Percent complete: 33.9%; Average loss: 3.6144 Iteration: 1356; Percent complete: 33.9%; Average loss: 3.1451 Iteration: 1357; Percent complete: 33.9%; Average loss: 3.1414 Iteration: 1358; Percent complete: 34.0%; Average loss: 3.3212 Iteration: 1359; Percent complete: 34.0%; Average loss: 3.2394 Iteration: 1360; Percent complete: 34.0%; Average loss: 3.4648 Iteration: 1361; Percent complete: 34.0%; Average loss: 3.3884 Iteration: 1362; Percent complete: 34.1%; Average loss: 3.4192 Iteration: 1363; Percent complete: 34.1%; Average loss: 3.3447 Iteration: 1364; Percent complete: 34.1%; Average loss: 3.3857 Iteration: 1365; Percent complete: 34.1%; Average loss: 3.3036 Iteration: 1366; Percent complete: 34.2%; Average loss: 3.3262 Iteration: 1367; Percent complete: 34.2%; Average loss: 3.3150 Iteration: 1368; Percent complete: 34.2%; Average loss: 3.1586 Iteration: 1369; Percent complete: 34.2%; Average loss: 3.3914 Iteration: 1370; Percent complete: 34.2%; Average loss: 3.2304 Iteration: 1371; Percent complete: 34.3%; Average loss: 3.0847 Iteration: 1372; Percent complete: 34.3%; Average loss: 3.3938 Iteration: 1373; Percent complete: 34.3%; Average loss: 3.1713 Iteration: 1374; Percent complete: 34.4%; Average loss: 3.2477 Iteration: 1375; Percent complete: 34.4%; Average loss: 3.3555 Iteration: 1376; Percent complete: 34.4%; Average loss: 3.2410 Iteration: 1377; Percent complete: 34.4%; Average 
loss: 3.5402 Iteration: 1378; Percent complete: 34.4%; Average loss: 3.3805 Iteration: 1379; Percent complete: 34.5%; Average loss: 3.4212 Iteration: 1380; Percent complete: 34.5%; Average loss: 3.2352 Iteration: 1381; Percent complete: 34.5%; Average loss: 3.0915 Iteration: 1382; Percent complete: 34.5%; Average loss: 3.2685 Iteration: 1383; Percent complete: 34.6%; Average loss: 3.3017 Iteration: 1384; Percent complete: 34.6%; Average loss: 3.3329 Iteration: 1385; Percent complete: 34.6%; Average loss: 3.4868 Iteration: 1386; Percent complete: 34.6%; Average loss: 3.2265 Iteration: 1387; Percent complete: 34.7%; Average loss: 3.2430 Iteration: 1388; Percent complete: 34.7%; Average loss: 3.2491 Iteration: 1389; Percent complete: 34.7%; Average loss: 3.0199 Iteration: 1390; Percent complete: 34.8%; Average loss: 3.2161 Iteration: 1391; Percent complete: 34.8%; Average loss: 3.4210 Iteration: 1392; Percent complete: 34.8%; Average loss: 3.3607 Iteration: 1393; Percent complete: 34.8%; Average loss: 3.2894 Iteration: 1394; Percent complete: 34.8%; Average loss: 3.3219 Iteration: 1395; Percent complete: 34.9%; Average loss: 3.2080 Iteration: 1396; Percent complete: 34.9%; Average loss: 3.2799 Iteration: 1397; Percent complete: 34.9%; Average loss: 3.4536 Iteration: 1398; Percent complete: 34.9%; Average loss: 3.2965 Iteration: 1399; Percent complete: 35.0%; Average loss: 3.1813 Iteration: 1400; Percent complete: 35.0%; Average loss: 3.5143 Iteration: 1401; Percent complete: 35.0%; Average loss: 3.4406 Iteration: 1402; Percent complete: 35.0%; Average loss: 3.3941 Iteration: 1403; Percent complete: 35.1%; Average loss: 3.4463 Iteration: 1404; Percent complete: 35.1%; Average loss: 3.3016 Iteration: 1405; Percent complete: 35.1%; Average loss: 3.4692 Iteration: 1406; Percent complete: 35.1%; Average loss: 3.4379 Iteration: 1407; Percent complete: 35.2%; Average loss: 3.2091 Iteration: 1408; Percent complete: 35.2%; Average loss: 3.2066 Iteration: 1409; Percent complete: 35.2%; Average loss: 3.3599 Iteration: 1410; Percent complete: 35.2%; Average loss: 3.1181 Iteration: 1411; Percent complete: 35.3%; Average loss: 3.1589 Iteration: 1412; Percent complete: 35.3%; Average loss: 3.5311 Iteration: 1413; Percent complete: 35.3%; Average loss: 3.2886 Iteration: 1414; Percent complete: 35.4%; Average loss: 3.6729 Iteration: 1415; Percent complete: 35.4%; Average loss: 3.4982 Iteration: 1416; Percent complete: 35.4%; Average loss: 3.2798 Iteration: 1417; Percent complete: 35.4%; Average loss: 3.3174 Iteration: 1418; Percent complete: 35.4%; Average loss: 3.2231 Iteration: 1419; Percent complete: 35.5%; Average loss: 3.1951 Iteration: 1420; Percent complete: 35.5%; Average loss: 3.1821 Iteration: 1421; Percent complete: 35.5%; Average loss: 3.1486 Iteration: 1422; Percent complete: 35.5%; Average loss: 3.2949 Iteration: 1423; Percent complete: 35.6%; Average loss: 3.4680 Iteration: 1424; Percent complete: 35.6%; Average loss: 3.2448 Iteration: 1425; Percent complete: 35.6%; Average loss: 3.2789 Iteration: 1426; Percent complete: 35.6%; Average loss: 3.2071 Iteration: 1427; Percent complete: 35.7%; Average loss: 3.3338 Iteration: 1428; Percent complete: 35.7%; Average loss: 3.1855 Iteration: 1429; Percent complete: 35.7%; Average loss: 3.0711 Iteration: 1430; Percent complete: 35.8%; Average loss: 3.4677 Iteration: 1431; Percent complete: 35.8%; Average loss: 3.3257 Iteration: 1432; Percent complete: 35.8%; Average loss: 3.3192 Iteration: 1433; Percent complete: 35.8%; Average loss: 3.3700 Iteration: 
1434; Percent complete: 35.9%; Average loss: 3.2309 Iteration: 1435; Percent complete: 35.9%; Average loss: 3.4808 Iteration: 1436; Percent complete: 35.9%; Average loss: 3.3657 Iteration: 1437; Percent complete: 35.9%; Average loss: 3.3221 Iteration: 1438; Percent complete: 35.9%; Average loss: 3.4242 Iteration: 1439; Percent complete: 36.0%; Average loss: 3.2358 Iteration: 1440; Percent complete: 36.0%; Average loss: 3.2851 Iteration: 1441; Percent complete: 36.0%; Average loss: 3.4597 Iteration: 1442; Percent complete: 36.0%; Average loss: 3.2381 Iteration: 1443; Percent complete: 36.1%; Average loss: 3.3699 Iteration: 1444; Percent complete: 36.1%; Average loss: 3.3875 Iteration: 1445; Percent complete: 36.1%; Average loss: 3.0991 Iteration: 1446; Percent complete: 36.1%; Average loss: 3.1350 Iteration: 1447; Percent complete: 36.2%; Average loss: 3.3193 Iteration: 1448; Percent complete: 36.2%; Average loss: 3.4063 Iteration: 1449; Percent complete: 36.2%; Average loss: 3.2202 Iteration: 1450; Percent complete: 36.2%; Average loss: 3.1726 Iteration: 1451; Percent complete: 36.3%; Average loss: 3.4203 Iteration: 1452; Percent complete: 36.3%; Average loss: 3.2620 Iteration: 1453; Percent complete: 36.3%; Average loss: 3.5294 Iteration: 1454; Percent complete: 36.4%; Average loss: 3.2776 Iteration: 1455; Percent complete: 36.4%; Average loss: 3.1428 Iteration: 1456; Percent complete: 36.4%; Average loss: 3.3169 Iteration: 1457; Percent complete: 36.4%; Average loss: 3.5686 Iteration: 1458; Percent complete: 36.4%; Average loss: 3.1567 Iteration: 1459; Percent complete: 36.5%; Average loss: 3.1288 Iteration: 1460; Percent complete: 36.5%; Average loss: 3.3469 Iteration: 1461; Percent complete: 36.5%; Average loss: 3.2708 Iteration: 1462; Percent complete: 36.5%; Average loss: 3.1200 Iteration: 1463; Percent complete: 36.6%; Average loss: 3.2511 Iteration: 1464; Percent complete: 36.6%; Average loss: 3.3007 Iteration: 1465; Percent complete: 36.6%; Average loss: 3.0486 Iteration: 1466; Percent complete: 36.6%; Average loss: 3.2747 Iteration: 1467; Percent complete: 36.7%; Average loss: 3.3388 Iteration: 1468; Percent complete: 36.7%; Average loss: 3.2574 Iteration: 1469; Percent complete: 36.7%; Average loss: 3.3458 Iteration: 1470; Percent complete: 36.8%; Average loss: 3.2488 Iteration: 1471; Percent complete: 36.8%; Average loss: 3.3890 Iteration: 1472; Percent complete: 36.8%; Average loss: 3.3811 Iteration: 1473; Percent complete: 36.8%; Average loss: 3.2104 Iteration: 1474; Percent complete: 36.9%; Average loss: 3.2992 Iteration: 1475; Percent complete: 36.9%; Average loss: 3.2733 Iteration: 1476; Percent complete: 36.9%; Average loss: 3.2452 Iteration: 1477; Percent complete: 36.9%; Average loss: 3.5150 Iteration: 1478; Percent complete: 37.0%; Average loss: 3.2732 Iteration: 1479; Percent complete: 37.0%; Average loss: 3.1926 Iteration: 1480; Percent complete: 37.0%; Average loss: 3.2290 Iteration: 1481; Percent complete: 37.0%; Average loss: 3.2110 Iteration: 1482; Percent complete: 37.0%; Average loss: 3.4273 Iteration: 1483; Percent complete: 37.1%; Average loss: 3.2531 Iteration: 1484; Percent complete: 37.1%; Average loss: 3.3462 Iteration: 1485; Percent complete: 37.1%; Average loss: 3.3063 Iteration: 1486; Percent complete: 37.1%; Average loss: 3.2711 Iteration: 1487; Percent complete: 37.2%; Average loss: 3.1543 Iteration: 1488; Percent complete: 37.2%; Average loss: 3.0575 Iteration: 1489; Percent complete: 37.2%; Average loss: 3.1832 Iteration: 1490; Percent complete: 
37.2%; Average loss: 3.0756 Iteration: 1491; Percent complete: 37.3%; Average loss: 3.4781 Iteration: 1492; Percent complete: 37.3%; Average loss: 2.9905 Iteration: 1493; Percent complete: 37.3%; Average loss: 3.4383 Iteration: 1494; Percent complete: 37.4%; Average loss: 3.4697 Iteration: 1495; Percent complete: 37.4%; Average loss: 3.2312 Iteration: 1496; Percent complete: 37.4%; Average loss: 3.2017 Iteration: 1497; Percent complete: 37.4%; Average loss: 3.3474 Iteration: 1498; Percent complete: 37.5%; Average loss: 3.4643 Iteration: 1499; Percent complete: 37.5%; Average loss: 3.4358 Iteration: 1500; Percent complete: 37.5%; Average loss: 3.3787 Iteration: 1501; Percent complete: 37.5%; Average loss: 3.4087 Iteration: 1502; Percent complete: 37.5%; Average loss: 3.2863 Iteration: 1503; Percent complete: 37.6%; Average loss: 3.4257 Iteration: 1504; Percent complete: 37.6%; Average loss: 3.3950 Iteration: 1505; Percent complete: 37.6%; Average loss: 3.2940 Iteration: 1506; Percent complete: 37.6%; Average loss: 3.2929 Iteration: 1507; Percent complete: 37.7%; Average loss: 3.4733 Iteration: 1508; Percent complete: 37.7%; Average loss: 3.2563 Iteration: 1509; Percent complete: 37.7%; Average loss: 3.1793 Iteration: 1510; Percent complete: 37.8%; Average loss: 3.8030 Iteration: 1511; Percent complete: 37.8%; Average loss: 3.4268 Iteration: 1512; Percent complete: 37.8%; Average loss: 3.2698 Iteration: 1513; Percent complete: 37.8%; Average loss: 3.6169 Iteration: 1514; Percent complete: 37.9%; Average loss: 3.3839 Iteration: 1515; Percent complete: 37.9%; Average loss: 3.0000 Iteration: 1516; Percent complete: 37.9%; Average loss: 3.3563 Iteration: 1517; Percent complete: 37.9%; Average loss: 3.4910 Iteration: 1518; Percent complete: 38.0%; Average loss: 3.2614 Iteration: 1519; Percent complete: 38.0%; Average loss: 3.3766 Iteration: 1520; Percent complete: 38.0%; Average loss: 3.3139 Iteration: 1521; Percent complete: 38.0%; Average loss: 3.5508 Iteration: 1522; Percent complete: 38.0%; Average loss: 3.2607 Iteration: 1523; Percent complete: 38.1%; Average loss: 3.2799 Iteration: 1524; Percent complete: 38.1%; Average loss: 3.0123 Iteration: 1525; Percent complete: 38.1%; Average loss: 3.3768 Iteration: 1526; Percent complete: 38.1%; Average loss: 3.1140 Iteration: 1527; Percent complete: 38.2%; Average loss: 3.0163 Iteration: 1528; Percent complete: 38.2%; Average loss: 3.4062 Iteration: 1529; Percent complete: 38.2%; Average loss: 3.2375 Iteration: 1530; Percent complete: 38.2%; Average loss: 3.3906 Iteration: 1531; Percent complete: 38.3%; Average loss: 3.4090 Iteration: 1532; Percent complete: 38.3%; Average loss: 3.2578 Iteration: 1533; Percent complete: 38.3%; Average loss: 3.2054 Iteration: 1534; Percent complete: 38.4%; Average loss: 3.5821 Iteration: 1535; Percent complete: 38.4%; Average loss: 3.1024 Iteration: 1536; Percent complete: 38.4%; Average loss: 3.1237 Iteration: 1537; Percent complete: 38.4%; Average loss: 3.4342 Iteration: 1538; Percent complete: 38.5%; Average loss: 3.2372 Iteration: 1539; Percent complete: 38.5%; Average loss: 3.1649 Iteration: 1540; Percent complete: 38.5%; Average loss: 3.1455 Iteration: 1541; Percent complete: 38.5%; Average loss: 3.1163 Iteration: 1542; Percent complete: 38.6%; Average loss: 3.0881 Iteration: 1543; Percent complete: 38.6%; Average loss: 3.4515 Iteration: 1544; Percent complete: 38.6%; Average loss: 3.4537 Iteration: 1545; Percent complete: 38.6%; Average loss: 3.1753 Iteration: 1546; Percent complete: 38.6%; Average loss: 
3.4366 Iteration: 1547; Percent complete: 38.7%; Average loss: 3.4521 Iteration: 1548; Percent complete: 38.7%; Average loss: 3.3746 Iteration: 1549; Percent complete: 38.7%; Average loss: 3.2985 Iteration: 1550; Percent complete: 38.8%; Average loss: 3.2109 Iteration: 1551; Percent complete: 38.8%; Average loss: 3.5025 Iteration: 1552; Percent complete: 38.8%; Average loss: 3.2707 Iteration: 1553; Percent complete: 38.8%; Average loss: 3.1182 Iteration: 1554; Percent complete: 38.9%; Average loss: 3.7350 Iteration: 1555; Percent complete: 38.9%; Average loss: 3.1680 Iteration: 1556; Percent complete: 38.9%; Average loss: 3.2588 Iteration: 1557; Percent complete: 38.9%; Average loss: 3.1834 Iteration: 1558; Percent complete: 39.0%; Average loss: 3.3306 Iteration: 1559; Percent complete: 39.0%; Average loss: 3.4235 Iteration: 1560; Percent complete: 39.0%; Average loss: 3.3437 Iteration: 1561; Percent complete: 39.0%; Average loss: 3.0152 Iteration: 1562; Percent complete: 39.1%; Average loss: 3.3194 Iteration: 1563; Percent complete: 39.1%; Average loss: 3.2483 Iteration: 1564; Percent complete: 39.1%; Average loss: 3.4278 Iteration: 1565; Percent complete: 39.1%; Average loss: 3.2791 Iteration: 1566; Percent complete: 39.1%; Average loss: 3.1787 Iteration: 1567; Percent complete: 39.2%; Average loss: 2.9502 Iteration: 1568; Percent complete: 39.2%; Average loss: 3.1615 Iteration: 1569; Percent complete: 39.2%; Average loss: 3.3950 Iteration: 1570; Percent complete: 39.2%; Average loss: 3.2549 Iteration: 1571; Percent complete: 39.3%; Average loss: 3.3850 Iteration: 1572; Percent complete: 39.3%; Average loss: 3.3090 Iteration: 1573; Percent complete: 39.3%; Average loss: 3.3091 Iteration: 1574; Percent complete: 39.4%; Average loss: 3.3229 Iteration: 1575; Percent complete: 39.4%; Average loss: 3.4172 Iteration: 1576; Percent complete: 39.4%; Average loss: 3.1548 Iteration: 1577; Percent complete: 39.4%; Average loss: 3.1743 Iteration: 1578; Percent complete: 39.5%; Average loss: 3.4616 Iteration: 1579; Percent complete: 39.5%; Average loss: 3.1213 Iteration: 1580; Percent complete: 39.5%; Average loss: 3.2366 Iteration: 1581; Percent complete: 39.5%; Average loss: 3.2060 Iteration: 1582; Percent complete: 39.6%; Average loss: 3.2468 Iteration: 1583; Percent complete: 39.6%; Average loss: 3.2961 Iteration: 1584; Percent complete: 39.6%; Average loss: 3.2419 Iteration: 1585; Percent complete: 39.6%; Average loss: 3.2250 Iteration: 1586; Percent complete: 39.6%; Average loss: 3.1643 Iteration: 1587; Percent complete: 39.7%; Average loss: 3.2824 Iteration: 1588; Percent complete: 39.7%; Average loss: 3.1257 Iteration: 1589; Percent complete: 39.7%; Average loss: 3.2625 Iteration: 1590; Percent complete: 39.8%; Average loss: 3.5135 Iteration: 1591; Percent complete: 39.8%; Average loss: 3.1622 Iteration: 1592; Percent complete: 39.8%; Average loss: 3.2903 Iteration: 1593; Percent complete: 39.8%; Average loss: 3.1985 Iteration: 1594; Percent complete: 39.9%; Average loss: 3.2564 Iteration: 1595; Percent complete: 39.9%; Average loss: 3.0278 Iteration: 1596; Percent complete: 39.9%; Average loss: 3.0941 Iteration: 1597; Percent complete: 39.9%; Average loss: 3.3979 Iteration: 1598; Percent complete: 40.0%; Average loss: 3.2421 Iteration: 1599; Percent complete: 40.0%; Average loss: 3.2576 Iteration: 1600; Percent complete: 40.0%; Average loss: 3.2718 Iteration: 1601; Percent complete: 40.0%; Average loss: 3.2138 Iteration: 1602; Percent complete: 40.1%; Average loss: 3.2615 Iteration: 1603; 
Percent complete: 40.1%; Average loss: 3.3092 Iteration: 1604; Percent complete: 40.1%; Average loss: 3.6510 Iteration: 1605; Percent complete: 40.1%; Average loss: 3.2478 Iteration: 1606; Percent complete: 40.2%; Average loss: 2.9798 Iteration: 1607; Percent complete: 40.2%; Average loss: 3.0058 Iteration: 1608; Percent complete: 40.2%; Average loss: 2.9498 Iteration: 1609; Percent complete: 40.2%; Average loss: 3.6435 Iteration: 1610; Percent complete: 40.2%; Average loss: 3.0641 Iteration: 1611; Percent complete: 40.3%; Average loss: 3.1489 Iteration: 1612; Percent complete: 40.3%; Average loss: 3.4427 Iteration: 1613; Percent complete: 40.3%; Average loss: 3.1455 Iteration: 1614; Percent complete: 40.4%; Average loss: 3.0191 Iteration: 1615; Percent complete: 40.4%; Average loss: 3.3464 Iteration: 1616; Percent complete: 40.4%; Average loss: 3.4799 Iteration: 1617; Percent complete: 40.4%; Average loss: 3.2582 Iteration: 1618; Percent complete: 40.5%; Average loss: 3.1521 Iteration: 1619; Percent complete: 40.5%; Average loss: 3.2810 Iteration: 1620; Percent complete: 40.5%; Average loss: 3.4181 Iteration: 1621; Percent complete: 40.5%; Average loss: 3.2670 Iteration: 1622; Percent complete: 40.6%; Average loss: 3.1492 Iteration: 1623; Percent complete: 40.6%; Average loss: 3.2473 Iteration: 1624; Percent complete: 40.6%; Average loss: 3.4984 Iteration: 1625; Percent complete: 40.6%; Average loss: 3.3018 Iteration: 1626; Percent complete: 40.6%; Average loss: 3.4837 Iteration: 1627; Percent complete: 40.7%; Average loss: 3.0142 Iteration: 1628; Percent complete: 40.7%; Average loss: 3.2089 Iteration: 1629; Percent complete: 40.7%; Average loss: 3.4158 Iteration: 1630; Percent complete: 40.8%; Average loss: 3.1339 Iteration: 1631; Percent complete: 40.8%; Average loss: 3.3157 Iteration: 1632; Percent complete: 40.8%; Average loss: 3.2894 Iteration: 1633; Percent complete: 40.8%; Average loss: 3.0252 Iteration: 1634; Percent complete: 40.8%; Average loss: 3.1440 Iteration: 1635; Percent complete: 40.9%; Average loss: 2.9436 Iteration: 1636; Percent complete: 40.9%; Average loss: 3.3249 Iteration: 1637; Percent complete: 40.9%; Average loss: 3.1869 Iteration: 1638; Percent complete: 40.9%; Average loss: 3.2799 Iteration: 1639; Percent complete: 41.0%; Average loss: 3.2669 Iteration: 1640; Percent complete: 41.0%; Average loss: 3.2807 Iteration: 1641; Percent complete: 41.0%; Average loss: 3.2019 Iteration: 1642; Percent complete: 41.0%; Average loss: 3.3485 Iteration: 1643; Percent complete: 41.1%; Average loss: 3.3730 Iteration: 1644; Percent complete: 41.1%; Average loss: 3.0482 Iteration: 1645; Percent complete: 41.1%; Average loss: 3.2400 Iteration: 1646; Percent complete: 41.1%; Average loss: 3.2989 Iteration: 1647; Percent complete: 41.2%; Average loss: 3.2114 Iteration: 1648; Percent complete: 41.2%; Average loss: 3.3884 Iteration: 1649; Percent complete: 41.2%; Average loss: 3.3033 Iteration: 1650; Percent complete: 41.2%; Average loss: 3.2637 Iteration: 1651; Percent complete: 41.3%; Average loss: 3.5792 Iteration: 1652; Percent complete: 41.3%; Average loss: 3.2925 Iteration: 1653; Percent complete: 41.3%; Average loss: 3.1604 Iteration: 1654; Percent complete: 41.3%; Average loss: 3.2262 Iteration: 1655; Percent complete: 41.4%; Average loss: 3.4258 Iteration: 1656; Percent complete: 41.4%; Average loss: 3.2597 Iteration: 1657; Percent complete: 41.4%; Average loss: 3.1735 Iteration: 1658; Percent complete: 41.4%; Average loss: 3.2378 Iteration: 1659; Percent complete: 41.5%; 
Average loss: 3.3234 Iteration: 1660; Percent complete: 41.5%; Average loss: 3.3232 Iteration: 1661; Percent complete: 41.5%; Average loss: 3.3722 Iteration: 1662; Percent complete: 41.5%; Average loss: 3.3271 Iteration: 1663; Percent complete: 41.6%; Average loss: 3.1315 Iteration: 1664; Percent complete: 41.6%; Average loss: 3.1676 Iteration: 1665; Percent complete: 41.6%; Average loss: 3.3027 Iteration: 1666; Percent complete: 41.6%; Average loss: 3.3134 Iteration: 1667; Percent complete: 41.7%; Average loss: 3.2870 Iteration: 1668; Percent complete: 41.7%; Average loss: 3.1711 Iteration: 1669; Percent complete: 41.7%; Average loss: 3.0168 Iteration: 1670; Percent complete: 41.8%; Average loss: 3.2001 Iteration: 1671; Percent complete: 41.8%; Average loss: 3.3127 Iteration: 1672; Percent complete: 41.8%; Average loss: 3.3611 Iteration: 1673; Percent complete: 41.8%; Average loss: 3.4268 Iteration: 1674; Percent complete: 41.9%; Average loss: 3.2747 Iteration: 1675; Percent complete: 41.9%; Average loss: 3.2452 Iteration: 1676; Percent complete: 41.9%; Average loss: 3.1003 Iteration: 1677; Percent complete: 41.9%; Average loss: 3.1958 Iteration: 1678; Percent complete: 41.9%; Average loss: 3.2012 Iteration: 1679; Percent complete: 42.0%; Average loss: 3.2298 Iteration: 1680; Percent complete: 42.0%; Average loss: 3.4285 Iteration: 1681; Percent complete: 42.0%; Average loss: 3.3679 Iteration: 1682; Percent complete: 42.0%; Average loss: 3.1272 Iteration: 1683; Percent complete: 42.1%; Average loss: 3.3510 Iteration: 1684; Percent complete: 42.1%; Average loss: 3.1627 Iteration: 1685; Percent complete: 42.1%; Average loss: 2.9975 Iteration: 1686; Percent complete: 42.1%; Average loss: 3.1935 Iteration: 1687; Percent complete: 42.2%; Average loss: 3.5357 Iteration: 1688; Percent complete: 42.2%; Average loss: 3.3664 Iteration: 1689; Percent complete: 42.2%; Average loss: 3.3377 Iteration: 1690; Percent complete: 42.2%; Average loss: 3.2663 Iteration: 1691; Percent complete: 42.3%; Average loss: 3.3431 Iteration: 1692; Percent complete: 42.3%; Average loss: 3.2214 Iteration: 1693; Percent complete: 42.3%; Average loss: 2.9741 Iteration: 1694; Percent complete: 42.4%; Average loss: 2.9993 Iteration: 1695; Percent complete: 42.4%; Average loss: 3.2137 Iteration: 1696; Percent complete: 42.4%; Average loss: 3.1291 Iteration: 1697; Percent complete: 42.4%; Average loss: 3.3592 Iteration: 1698; Percent complete: 42.4%; Average loss: 3.1828 Iteration: 1699; Percent complete: 42.5%; Average loss: 3.1799 Iteration: 1700; Percent complete: 42.5%; Average loss: 3.2324 Iteration: 1701; Percent complete: 42.5%; Average loss: 3.0435 Iteration: 1702; Percent complete: 42.5%; Average loss: 3.2528 Iteration: 1703; Percent complete: 42.6%; Average loss: 3.3709 Iteration: 1704; Percent complete: 42.6%; Average loss: 3.2947 Iteration: 1705; Percent complete: 42.6%; Average loss: 3.4270 Iteration: 1706; Percent complete: 42.6%; Average loss: 3.4836 Iteration: 1707; Percent complete: 42.7%; Average loss: 3.0796 Iteration: 1708; Percent complete: 42.7%; Average loss: 3.2390 Iteration: 1709; Percent complete: 42.7%; Average loss: 3.1241 Iteration: 1710; Percent complete: 42.8%; Average loss: 3.2386 Iteration: 1711; Percent complete: 42.8%; Average loss: 3.1297 Iteration: 1712; Percent complete: 42.8%; Average loss: 3.1383 Iteration: 1713; Percent complete: 42.8%; Average loss: 3.2846 Iteration: 1714; Percent complete: 42.9%; Average loss: 3.1334 Iteration: 1715; Percent complete: 42.9%; Average loss: 3.1653 
Iteration: 1716; Percent complete: 42.9%; Average loss: 3.2234 Iteration: 1717; Percent complete: 42.9%; Average loss: 3.4281 Iteration: 1718; Percent complete: 43.0%; Average loss: 3.2577 Iteration: 1719; Percent complete: 43.0%; Average loss: 3.3845 Iteration: 1720; Percent complete: 43.0%; Average loss: 3.3549 Iteration: 1721; Percent complete: 43.0%; Average loss: 3.0498 Iteration: 1722; Percent complete: 43.0%; Average loss: 3.2322 Iteration: 1723; Percent complete: 43.1%; Average loss: 3.3550 Iteration: 1724; Percent complete: 43.1%; Average loss: 3.0554 Iteration: 1725; Percent complete: 43.1%; Average loss: 3.2092 Iteration: 1726; Percent complete: 43.1%; Average loss: 3.2481 Iteration: 1727; Percent complete: 43.2%; Average loss: 3.1736 Iteration: 1728; Percent complete: 43.2%; Average loss: 3.3177 Iteration: 1729; Percent complete: 43.2%; Average loss: 3.1915 Iteration: 1730; Percent complete: 43.2%; Average loss: 3.2987 Iteration: 1731; Percent complete: 43.3%; Average loss: 3.2874 Iteration: 1732; Percent complete: 43.3%; Average loss: 3.1005 Iteration: 1733; Percent complete: 43.3%; Average loss: 3.2507 Iteration: 1734; Percent complete: 43.4%; Average loss: 3.4441 Iteration: 1735; Percent complete: 43.4%; Average loss: 3.3319 Iteration: 1736; Percent complete: 43.4%; Average loss: 3.0157 Iteration: 1737; Percent complete: 43.4%; Average loss: 3.3509 Iteration: 1738; Percent complete: 43.5%; Average loss: 3.2492 Iteration: 1739; Percent complete: 43.5%; Average loss: 2.9239 Iteration: 1740; Percent complete: 43.5%; Average loss: 3.0402 Iteration: 1741; Percent complete: 43.5%; Average loss: 3.1173 Iteration: 1742; Percent complete: 43.5%; Average loss: 3.1064 Iteration: 1743; Percent complete: 43.6%; Average loss: 3.2918 Iteration: 1744; Percent complete: 43.6%; Average loss: 3.2533 Iteration: 1745; Percent complete: 43.6%; Average loss: 3.2701 Iteration: 1746; Percent complete: 43.6%; Average loss: 3.4732 Iteration: 1747; Percent complete: 43.7%; Average loss: 3.2880 Iteration: 1748; Percent complete: 43.7%; Average loss: 3.2259 Iteration: 1749; Percent complete: 43.7%; Average loss: 3.1017 Iteration: 1750; Percent complete: 43.8%; Average loss: 3.1054 Iteration: 1751; Percent complete: 43.8%; Average loss: 3.1389 Iteration: 1752; Percent complete: 43.8%; Average loss: 3.2015 Iteration: 1753; Percent complete: 43.8%; Average loss: 3.3259 Iteration: 1754; Percent complete: 43.9%; Average loss: 3.4447 Iteration: 1755; Percent complete: 43.9%; Average loss: 3.0727 Iteration: 1756; Percent complete: 43.9%; Average loss: 3.1914 Iteration: 1757; Percent complete: 43.9%; Average loss: 3.1512 Iteration: 1758; Percent complete: 44.0%; Average loss: 2.9593 Iteration: 1759; Percent complete: 44.0%; Average loss: 3.2835 Iteration: 1760; Percent complete: 44.0%; Average loss: 3.2525 Iteration: 1761; Percent complete: 44.0%; Average loss: 3.1632 Iteration: 1762; Percent complete: 44.0%; Average loss: 3.4956 Iteration: 1763; Percent complete: 44.1%; Average loss: 3.3708 Iteration: 1764; Percent complete: 44.1%; Average loss: 3.1860 Iteration: 1765; Percent complete: 44.1%; Average loss: 3.2078 Iteration: 1766; Percent complete: 44.1%; Average loss: 3.1362 Iteration: 1767; Percent complete: 44.2%; Average loss: 3.4578 Iteration: 1768; Percent complete: 44.2%; Average loss: 3.4157 Iteration: 1769; Percent complete: 44.2%; Average loss: 3.2813 Iteration: 1770; Percent complete: 44.2%; Average loss: 3.1821 Iteration: 1771; Percent complete: 44.3%; Average loss: 3.2057 Iteration: 1772; Percent 
complete: 44.3%; Average loss: 3.1172 Iteration: 1773; Percent complete: 44.3%; Average loss: 2.9317 Iteration: 1774; Percent complete: 44.4%; Average loss: 2.9349 Iteration: 1775; Percent complete: 44.4%; Average loss: 3.2215 Iteration: 1776; Percent complete: 44.4%; Average loss: 3.2726 Iteration: 1777; Percent complete: 44.4%; Average loss: 3.3386 Iteration: 1778; Percent complete: 44.5%; Average loss: 3.1465 Iteration: 1779; Percent complete: 44.5%; Average loss: 3.1110 Iteration: 1780; Percent complete: 44.5%; Average loss: 3.2591 Iteration: 1781; Percent complete: 44.5%; Average loss: 2.9822 Iteration: 1782; Percent complete: 44.5%; Average loss: 3.3348 Iteration: 1783; Percent complete: 44.6%; Average loss: 3.3995 Iteration: 1784; Percent complete: 44.6%; Average loss: 3.3591 Iteration: 1785; Percent complete: 44.6%; Average loss: 3.1286 Iteration: 1786; Percent complete: 44.6%; Average loss: 3.1120 Iteration: 1787; Percent complete: 44.7%; Average loss: 3.0109 Iteration: 1788; Percent complete: 44.7%; Average loss: 2.8839 Iteration: 1789; Percent complete: 44.7%; Average loss: 3.1049 Iteration: 1790; Percent complete: 44.8%; Average loss: 2.8331 Iteration: 1791; Percent complete: 44.8%; Average loss: 3.0602 Iteration: 1792; Percent complete: 44.8%; Average loss: 3.1948 Iteration: 1793; Percent complete: 44.8%; Average loss: 3.0458 Iteration: 1794; Percent complete: 44.9%; Average loss: 3.3351 Iteration: 1795; Percent complete: 44.9%; Average loss: 3.2514 Iteration: 1796; Percent complete: 44.9%; Average loss: 3.1592 Iteration: 1797; Percent complete: 44.9%; Average loss: 3.0776 Iteration: 1798; Percent complete: 45.0%; Average loss: 3.2098 Iteration: 1799; Percent complete: 45.0%; Average loss: 3.2896 Iteration: 1800; Percent complete: 45.0%; Average loss: 3.3861 Iteration: 1801; Percent complete: 45.0%; Average loss: 3.0966 Iteration: 1802; Percent complete: 45.1%; Average loss: 3.0362 Iteration: 1803; Percent complete: 45.1%; Average loss: 3.2794 Iteration: 1804; Percent complete: 45.1%; Average loss: 3.2528 Iteration: 1805; Percent complete: 45.1%; Average loss: 3.2493 Iteration: 1806; Percent complete: 45.1%; Average loss: 3.4002 Iteration: 1807; Percent complete: 45.2%; Average loss: 3.2615 Iteration: 1808; Percent complete: 45.2%; Average loss: 3.3052 Iteration: 1809; Percent complete: 45.2%; Average loss: 3.2996 Iteration: 1810; Percent complete: 45.2%; Average loss: 3.3184 Iteration: 1811; Percent complete: 45.3%; Average loss: 3.2691 Iteration: 1812; Percent complete: 45.3%; Average loss: 3.2259 Iteration: 1813; Percent complete: 45.3%; Average loss: 3.2515 Iteration: 1814; Percent complete: 45.4%; Average loss: 3.1523 Iteration: 1815; Percent complete: 45.4%; Average loss: 3.2006 Iteration: 1816; Percent complete: 45.4%; Average loss: 3.1517 Iteration: 1817; Percent complete: 45.4%; Average loss: 3.2016 Iteration: 1818; Percent complete: 45.5%; Average loss: 3.1880 Iteration: 1819; Percent complete: 45.5%; Average loss: 3.2980 Iteration: 1820; Percent complete: 45.5%; Average loss: 2.9916 Iteration: 1821; Percent complete: 45.5%; Average loss: 3.0708 Iteration: 1822; Percent complete: 45.6%; Average loss: 3.1710 Iteration: 1823; Percent complete: 45.6%; Average loss: 3.3389 Iteration: 1824; Percent complete: 45.6%; Average loss: 3.1040 Iteration: 1825; Percent complete: 45.6%; Average loss: 3.0772 Iteration: 1826; Percent complete: 45.6%; Average loss: 3.3079 Iteration: 1827; Percent complete: 45.7%; Average loss: 3.1797 Iteration: 1828; Percent complete: 45.7%; Average 
loss: 2.9786
Iteration: 1829; Percent complete: 45.7%; Average loss: 3.2141
Iteration: 1830; Percent complete: 45.8%; Average loss: 3.1874
Iteration: 1831; Percent complete: 45.8%; Average loss: 3.1626
Iteration: 1832; Percent complete: 45.8%; Average loss: 3.0971
Iteration: 1833; Percent complete: 45.8%; Average loss: 3.3368
...
Iteration: 2000; Percent complete: 50.0%; Average loss: 3.0948
...
Iteration: 2250; Percent complete: 56.2%; Average loss: 2.8128
...
Iteration: 2500; Percent complete: 62.5%; Average loss: 3.1964
...
Iteration: 2750; Percent complete: 68.8%; Average loss: 2.8482
...
Iteration: 3000; Percent complete: 75.0%; Average loss: 2.9457
...
Iteration: 3008; Percent complete: 75.2%; Average loss: 2.8050
Iteration: 3009; Percent complete: 75.2%; Average loss: 2.8816
Iteration: 3010; Percent complete: 75.2%; Average loss: 2.6148
Iteration: 3011; Percent complete: 75.3%; Average loss: 2.7721
Iteration: 3012; Percent complete: 75.3%;
Average loss: 3.0331 Iteration: 3013; Percent complete: 75.3%; Average loss: 2.9468 Iteration: 3014; Percent complete: 75.3%; Average loss: 2.8168 Iteration: 3015; Percent complete: 75.4%; Average loss: 2.9918 Iteration: 3016; Percent complete: 75.4%; Average loss: 2.8244 Iteration: 3017; Percent complete: 75.4%; Average loss: 2.8126 Iteration: 3018; Percent complete: 75.4%; Average loss: 2.9709 Iteration: 3019; Percent complete: 75.5%; Average loss: 2.7138 Iteration: 3020; Percent complete: 75.5%; Average loss: 2.8506 Iteration: 3021; Percent complete: 75.5%; Average loss: 2.9896 Iteration: 3022; Percent complete: 75.5%; Average loss: 3.0187 Iteration: 3023; Percent complete: 75.6%; Average loss: 2.9167 Iteration: 3024; Percent complete: 75.6%; Average loss: 2.9831 Iteration: 3025; Percent complete: 75.6%; Average loss: 2.7473 Iteration: 3026; Percent complete: 75.6%; Average loss: 2.9938 Iteration: 3027; Percent complete: 75.7%; Average loss: 2.8059 Iteration: 3028; Percent complete: 75.7%; Average loss: 2.8428 Iteration: 3029; Percent complete: 75.7%; Average loss: 2.8500 Iteration: 3030; Percent complete: 75.8%; Average loss: 2.7381 Iteration: 3031; Percent complete: 75.8%; Average loss: 2.7531 Iteration: 3032; Percent complete: 75.8%; Average loss: 2.8268 Iteration: 3033; Percent complete: 75.8%; Average loss: 3.1124 Iteration: 3034; Percent complete: 75.8%; Average loss: 2.7830 Iteration: 3035; Percent complete: 75.9%; Average loss: 3.0254 Iteration: 3036; Percent complete: 75.9%; Average loss: 2.8651 Iteration: 3037; Percent complete: 75.9%; Average loss: 2.8221 Iteration: 3038; Percent complete: 75.9%; Average loss: 2.7592 Iteration: 3039; Percent complete: 76.0%; Average loss: 2.7877 Iteration: 3040; Percent complete: 76.0%; Average loss: 2.8444 Iteration: 3041; Percent complete: 76.0%; Average loss: 2.9705 Iteration: 3042; Percent complete: 76.0%; Average loss: 2.6934 Iteration: 3043; Percent complete: 76.1%; Average loss: 2.6572 Iteration: 3044; Percent complete: 76.1%; Average loss: 2.7704 Iteration: 3045; Percent complete: 76.1%; Average loss: 2.8687 Iteration: 3046; Percent complete: 76.1%; Average loss: 2.9330 Iteration: 3047; Percent complete: 76.2%; Average loss: 2.7703 Iteration: 3048; Percent complete: 76.2%; Average loss: 2.7719 Iteration: 3049; Percent complete: 76.2%; Average loss: 2.6886 Iteration: 3050; Percent complete: 76.2%; Average loss: 2.6986 Iteration: 3051; Percent complete: 76.3%; Average loss: 2.7390 Iteration: 3052; Percent complete: 76.3%; Average loss: 2.8291 Iteration: 3053; Percent complete: 76.3%; Average loss: 2.9737 Iteration: 3054; Percent complete: 76.3%; Average loss: 2.4393 Iteration: 3055; Percent complete: 76.4%; Average loss: 3.0142 Iteration: 3056; Percent complete: 76.4%; Average loss: 2.9420 Iteration: 3057; Percent complete: 76.4%; Average loss: 2.8674 Iteration: 3058; Percent complete: 76.4%; Average loss: 2.9913 Iteration: 3059; Percent complete: 76.5%; Average loss: 2.9445 Iteration: 3060; Percent complete: 76.5%; Average loss: 2.7795 Iteration: 3061; Percent complete: 76.5%; Average loss: 2.7676 Iteration: 3062; Percent complete: 76.5%; Average loss: 2.9271 Iteration: 3063; Percent complete: 76.6%; Average loss: 2.7686 Iteration: 3064; Percent complete: 76.6%; Average loss: 2.6827 Iteration: 3065; Percent complete: 76.6%; Average loss: 2.8534 Iteration: 3066; Percent complete: 76.6%; Average loss: 2.9248 Iteration: 3067; Percent complete: 76.7%; Average loss: 2.7642 Iteration: 3068; Percent complete: 76.7%; Average loss: 2.6182 
Iteration: 3069; Percent complete: 76.7%; Average loss: 2.7628 Iteration: 3070; Percent complete: 76.8%; Average loss: 2.7135 Iteration: 3071; Percent complete: 76.8%; Average loss: 2.9116 Iteration: 3072; Percent complete: 76.8%; Average loss: 2.6462 Iteration: 3073; Percent complete: 76.8%; Average loss: 2.8572 Iteration: 3074; Percent complete: 76.8%; Average loss: 2.9380 Iteration: 3075; Percent complete: 76.9%; Average loss: 2.6981 Iteration: 3076; Percent complete: 76.9%; Average loss: 2.7935 Iteration: 3077; Percent complete: 76.9%; Average loss: 2.7862 Iteration: 3078; Percent complete: 77.0%; Average loss: 2.9882 Iteration: 3079; Percent complete: 77.0%; Average loss: 2.6544 Iteration: 3080; Percent complete: 77.0%; Average loss: 2.7847 Iteration: 3081; Percent complete: 77.0%; Average loss: 2.8767 Iteration: 3082; Percent complete: 77.0%; Average loss: 2.8983 Iteration: 3083; Percent complete: 77.1%; Average loss: 2.6355 Iteration: 3084; Percent complete: 77.1%; Average loss: 2.9501 Iteration: 3085; Percent complete: 77.1%; Average loss: 2.9698 Iteration: 3086; Percent complete: 77.1%; Average loss: 2.9885 Iteration: 3087; Percent complete: 77.2%; Average loss: 2.5848 Iteration: 3088; Percent complete: 77.2%; Average loss: 2.7780 Iteration: 3089; Percent complete: 77.2%; Average loss: 2.9579 Iteration: 3090; Percent complete: 77.2%; Average loss: 2.5658 Iteration: 3091; Percent complete: 77.3%; Average loss: 2.9736 Iteration: 3092; Percent complete: 77.3%; Average loss: 2.8170 Iteration: 3093; Percent complete: 77.3%; Average loss: 2.9494 Iteration: 3094; Percent complete: 77.3%; Average loss: 3.0026 Iteration: 3095; Percent complete: 77.4%; Average loss: 2.7646 Iteration: 3096; Percent complete: 77.4%; Average loss: 2.8412 Iteration: 3097; Percent complete: 77.4%; Average loss: 3.0414 Iteration: 3098; Percent complete: 77.5%; Average loss: 2.8056 Iteration: 3099; Percent complete: 77.5%; Average loss: 2.8410 Iteration: 3100; Percent complete: 77.5%; Average loss: 3.0835 Iteration: 3101; Percent complete: 77.5%; Average loss: 2.9414 Iteration: 3102; Percent complete: 77.5%; Average loss: 2.8052 Iteration: 3103; Percent complete: 77.6%; Average loss: 2.8541 Iteration: 3104; Percent complete: 77.6%; Average loss: 2.9097 Iteration: 3105; Percent complete: 77.6%; Average loss: 2.9029 Iteration: 3106; Percent complete: 77.6%; Average loss: 2.7344 Iteration: 3107; Percent complete: 77.7%; Average loss: 2.8429 Iteration: 3108; Percent complete: 77.7%; Average loss: 3.0851 Iteration: 3109; Percent complete: 77.7%; Average loss: 2.8437 Iteration: 3110; Percent complete: 77.8%; Average loss: 2.7722 Iteration: 3111; Percent complete: 77.8%; Average loss: 2.6934 Iteration: 3112; Percent complete: 77.8%; Average loss: 2.7802 Iteration: 3113; Percent complete: 77.8%; Average loss: 3.3085 Iteration: 3114; Percent complete: 77.8%; Average loss: 2.8029 Iteration: 3115; Percent complete: 77.9%; Average loss: 2.9788 Iteration: 3116; Percent complete: 77.9%; Average loss: 2.8633 Iteration: 3117; Percent complete: 77.9%; Average loss: 2.8432 Iteration: 3118; Percent complete: 78.0%; Average loss: 2.8758 Iteration: 3119; Percent complete: 78.0%; Average loss: 2.6448 Iteration: 3120; Percent complete: 78.0%; Average loss: 2.8470 Iteration: 3121; Percent complete: 78.0%; Average loss: 2.9565 Iteration: 3122; Percent complete: 78.0%; Average loss: 3.0055 Iteration: 3123; Percent complete: 78.1%; Average loss: 2.7990 Iteration: 3124; Percent complete: 78.1%; Average loss: 2.8387 Iteration: 3125; Percent 
complete: 78.1%; Average loss: 3.0499 Iteration: 3126; Percent complete: 78.1%; Average loss: 2.7635 Iteration: 3127; Percent complete: 78.2%; Average loss: 2.8165 Iteration: 3128; Percent complete: 78.2%; Average loss: 2.8583 Iteration: 3129; Percent complete: 78.2%; Average loss: 2.9489 Iteration: 3130; Percent complete: 78.2%; Average loss: 2.6549 Iteration: 3131; Percent complete: 78.3%; Average loss: 2.6185 Iteration: 3132; Percent complete: 78.3%; Average loss: 2.8043 Iteration: 3133; Percent complete: 78.3%; Average loss: 2.7446 Iteration: 3134; Percent complete: 78.3%; Average loss: 2.7520 Iteration: 3135; Percent complete: 78.4%; Average loss: 2.8942 Iteration: 3136; Percent complete: 78.4%; Average loss: 2.6640 Iteration: 3137; Percent complete: 78.4%; Average loss: 2.7117 Iteration: 3138; Percent complete: 78.5%; Average loss: 2.9687 Iteration: 3139; Percent complete: 78.5%; Average loss: 2.9050 Iteration: 3140; Percent complete: 78.5%; Average loss: 2.9184 Iteration: 3141; Percent complete: 78.5%; Average loss: 2.7645 Iteration: 3142; Percent complete: 78.5%; Average loss: 2.9726 Iteration: 3143; Percent complete: 78.6%; Average loss: 2.9041 Iteration: 3144; Percent complete: 78.6%; Average loss: 2.8911 Iteration: 3145; Percent complete: 78.6%; Average loss: 2.8123 Iteration: 3146; Percent complete: 78.6%; Average loss: 2.8209 Iteration: 3147; Percent complete: 78.7%; Average loss: 2.9752 Iteration: 3148; Percent complete: 78.7%; Average loss: 2.8241 Iteration: 3149; Percent complete: 78.7%; Average loss: 2.7952 Iteration: 3150; Percent complete: 78.8%; Average loss: 2.9187 Iteration: 3151; Percent complete: 78.8%; Average loss: 2.5914 Iteration: 3152; Percent complete: 78.8%; Average loss: 2.7047 Iteration: 3153; Percent complete: 78.8%; Average loss: 2.8796 Iteration: 3154; Percent complete: 78.8%; Average loss: 2.8054 Iteration: 3155; Percent complete: 78.9%; Average loss: 2.8857 Iteration: 3156; Percent complete: 78.9%; Average loss: 2.7118 Iteration: 3157; Percent complete: 78.9%; Average loss: 2.7414 Iteration: 3158; Percent complete: 79.0%; Average loss: 2.7065 Iteration: 3159; Percent complete: 79.0%; Average loss: 2.6843 Iteration: 3160; Percent complete: 79.0%; Average loss: 2.9572 Iteration: 3161; Percent complete: 79.0%; Average loss: 2.7692 Iteration: 3162; Percent complete: 79.0%; Average loss: 2.6625 Iteration: 3163; Percent complete: 79.1%; Average loss: 2.7064 Iteration: 3164; Percent complete: 79.1%; Average loss: 2.9864 Iteration: 3165; Percent complete: 79.1%; Average loss: 2.8739 Iteration: 3166; Percent complete: 79.1%; Average loss: 2.6895 Iteration: 3167; Percent complete: 79.2%; Average loss: 2.6778 Iteration: 3168; Percent complete: 79.2%; Average loss: 2.8637 Iteration: 3169; Percent complete: 79.2%; Average loss: 2.7818 Iteration: 3170; Percent complete: 79.2%; Average loss: 2.7601 Iteration: 3171; Percent complete: 79.3%; Average loss: 2.8585 Iteration: 3172; Percent complete: 79.3%; Average loss: 2.8436 Iteration: 3173; Percent complete: 79.3%; Average loss: 2.7512 Iteration: 3174; Percent complete: 79.3%; Average loss: 2.6420 Iteration: 3175; Percent complete: 79.4%; Average loss: 2.8074 Iteration: 3176; Percent complete: 79.4%; Average loss: 2.7849 Iteration: 3177; Percent complete: 79.4%; Average loss: 2.6173 Iteration: 3178; Percent complete: 79.5%; Average loss: 2.7528 Iteration: 3179; Percent complete: 79.5%; Average loss: 2.9413 Iteration: 3180; Percent complete: 79.5%; Average loss: 2.7986 Iteration: 3181; Percent complete: 79.5%; Average 
loss: 2.9402 Iteration: 3182; Percent complete: 79.5%; Average loss: 2.6625 Iteration: 3183; Percent complete: 79.6%; Average loss: 2.9786 Iteration: 3184; Percent complete: 79.6%; Average loss: 2.4683 Iteration: 3185; Percent complete: 79.6%; Average loss: 2.8046 Iteration: 3186; Percent complete: 79.7%; Average loss: 2.8498 Iteration: 3187; Percent complete: 79.7%; Average loss: 2.9366 Iteration: 3188; Percent complete: 79.7%; Average loss: 2.8216 Iteration: 3189; Percent complete: 79.7%; Average loss: 2.6773 Iteration: 3190; Percent complete: 79.8%; Average loss: 2.8041 Iteration: 3191; Percent complete: 79.8%; Average loss: 2.7163 Iteration: 3192; Percent complete: 79.8%; Average loss: 2.9785 Iteration: 3193; Percent complete: 79.8%; Average loss: 2.6885 Iteration: 3194; Percent complete: 79.8%; Average loss: 2.6420 Iteration: 3195; Percent complete: 79.9%; Average loss: 2.7512 Iteration: 3196; Percent complete: 79.9%; Average loss: 2.8320 Iteration: 3197; Percent complete: 79.9%; Average loss: 2.9680 Iteration: 3198; Percent complete: 80.0%; Average loss: 2.8795 Iteration: 3199; Percent complete: 80.0%; Average loss: 2.9356 Iteration: 3200; Percent complete: 80.0%; Average loss: 2.6428 Iteration: 3201; Percent complete: 80.0%; Average loss: 2.8654 Iteration: 3202; Percent complete: 80.0%; Average loss: 2.6256 Iteration: 3203; Percent complete: 80.1%; Average loss: 2.9917 Iteration: 3204; Percent complete: 80.1%; Average loss: 2.6680 Iteration: 3205; Percent complete: 80.1%; Average loss: 2.6719 Iteration: 3206; Percent complete: 80.2%; Average loss: 3.0610 Iteration: 3207; Percent complete: 80.2%; Average loss: 2.7226 Iteration: 3208; Percent complete: 80.2%; Average loss: 2.9555 Iteration: 3209; Percent complete: 80.2%; Average loss: 2.8346 Iteration: 3210; Percent complete: 80.2%; Average loss: 2.6909 Iteration: 3211; Percent complete: 80.3%; Average loss: 2.9653 Iteration: 3212; Percent complete: 80.3%; Average loss: 2.8760 Iteration: 3213; Percent complete: 80.3%; Average loss: 2.8196 Iteration: 3214; Percent complete: 80.3%; Average loss: 2.7713 Iteration: 3215; Percent complete: 80.4%; Average loss: 2.8475 Iteration: 3216; Percent complete: 80.4%; Average loss: 2.7421 Iteration: 3217; Percent complete: 80.4%; Average loss: 3.0080 Iteration: 3218; Percent complete: 80.5%; Average loss: 2.8871 Iteration: 3219; Percent complete: 80.5%; Average loss: 2.7941 Iteration: 3220; Percent complete: 80.5%; Average loss: 2.7841 Iteration: 3221; Percent complete: 80.5%; Average loss: 2.7811 Iteration: 3222; Percent complete: 80.5%; Average loss: 3.0771 Iteration: 3223; Percent complete: 80.6%; Average loss: 3.0156 Iteration: 3224; Percent complete: 80.6%; Average loss: 2.5607 Iteration: 3225; Percent complete: 80.6%; Average loss: 2.5104 Iteration: 3226; Percent complete: 80.7%; Average loss: 2.7959 Iteration: 3227; Percent complete: 80.7%; Average loss: 2.6996 Iteration: 3228; Percent complete: 80.7%; Average loss: 2.6467 Iteration: 3229; Percent complete: 80.7%; Average loss: 2.7564 Iteration: 3230; Percent complete: 80.8%; Average loss: 2.9390 Iteration: 3231; Percent complete: 80.8%; Average loss: 2.8740 Iteration: 3232; Percent complete: 80.8%; Average loss: 2.9176 Iteration: 3233; Percent complete: 80.8%; Average loss: 2.6920 Iteration: 3234; Percent complete: 80.8%; Average loss: 2.6497 Iteration: 3235; Percent complete: 80.9%; Average loss: 2.7984 Iteration: 3236; Percent complete: 80.9%; Average loss: 2.6904 Iteration: 3237; Percent complete: 80.9%; Average loss: 2.9325 Iteration: 
3238; Percent complete: 81.0%; Average loss: 2.9462 Iteration: 3239; Percent complete: 81.0%; Average loss: 2.7123 Iteration: 3240; Percent complete: 81.0%; Average loss: 2.8129 Iteration: 3241; Percent complete: 81.0%; Average loss: 2.6593 Iteration: 3242; Percent complete: 81.0%; Average loss: 2.8986 Iteration: 3243; Percent complete: 81.1%; Average loss: 2.7298 Iteration: 3244; Percent complete: 81.1%; Average loss: 2.7725 Iteration: 3245; Percent complete: 81.1%; Average loss: 2.8058 Iteration: 3246; Percent complete: 81.2%; Average loss: 2.7854 Iteration: 3247; Percent complete: 81.2%; Average loss: 2.6503 Iteration: 3248; Percent complete: 81.2%; Average loss: 3.0612 Iteration: 3249; Percent complete: 81.2%; Average loss: 2.8541 Iteration: 3250; Percent complete: 81.2%; Average loss: 2.7792 Iteration: 3251; Percent complete: 81.3%; Average loss: 2.9222 Iteration: 3252; Percent complete: 81.3%; Average loss: 2.7349 Iteration: 3253; Percent complete: 81.3%; Average loss: 2.9061 Iteration: 3254; Percent complete: 81.3%; Average loss: 2.7775 Iteration: 3255; Percent complete: 81.4%; Average loss: 2.9645 Iteration: 3256; Percent complete: 81.4%; Average loss: 2.7014 Iteration: 3257; Percent complete: 81.4%; Average loss: 2.6203 Iteration: 3258; Percent complete: 81.5%; Average loss: 2.8336 Iteration: 3259; Percent complete: 81.5%; Average loss: 2.6712 Iteration: 3260; Percent complete: 81.5%; Average loss: 2.7246 Iteration: 3261; Percent complete: 81.5%; Average loss: 2.8833 Iteration: 3262; Percent complete: 81.5%; Average loss: 3.0622 Iteration: 3263; Percent complete: 81.6%; Average loss: 2.6841 Iteration: 3264; Percent complete: 81.6%; Average loss: 2.8147 Iteration: 3265; Percent complete: 81.6%; Average loss: 2.8503 Iteration: 3266; Percent complete: 81.7%; Average loss: 2.7060 Iteration: 3267; Percent complete: 81.7%; Average loss: 2.5772 Iteration: 3268; Percent complete: 81.7%; Average loss: 2.8852 Iteration: 3269; Percent complete: 81.7%; Average loss: 2.6220 Iteration: 3270; Percent complete: 81.8%; Average loss: 2.6606 Iteration: 3271; Percent complete: 81.8%; Average loss: 3.0053 Iteration: 3272; Percent complete: 81.8%; Average loss: 2.7392 Iteration: 3273; Percent complete: 81.8%; Average loss: 2.7267 Iteration: 3274; Percent complete: 81.8%; Average loss: 2.8132 Iteration: 3275; Percent complete: 81.9%; Average loss: 2.6898 Iteration: 3276; Percent complete: 81.9%; Average loss: 2.7207 Iteration: 3277; Percent complete: 81.9%; Average loss: 2.7211 Iteration: 3278; Percent complete: 82.0%; Average loss: 2.8799 Iteration: 3279; Percent complete: 82.0%; Average loss: 2.7684 Iteration: 3280; Percent complete: 82.0%; Average loss: 2.8559 Iteration: 3281; Percent complete: 82.0%; Average loss: 2.7043 Iteration: 3282; Percent complete: 82.0%; Average loss: 2.6257 Iteration: 3283; Percent complete: 82.1%; Average loss: 2.6796 Iteration: 3284; Percent complete: 82.1%; Average loss: 2.8473 Iteration: 3285; Percent complete: 82.1%; Average loss: 2.5246 Iteration: 3286; Percent complete: 82.2%; Average loss: 2.6474 Iteration: 3287; Percent complete: 82.2%; Average loss: 2.8722 Iteration: 3288; Percent complete: 82.2%; Average loss: 2.7805 Iteration: 3289; Percent complete: 82.2%; Average loss: 2.6960 Iteration: 3290; Percent complete: 82.2%; Average loss: 2.8419 Iteration: 3291; Percent complete: 82.3%; Average loss: 2.7196 Iteration: 3292; Percent complete: 82.3%; Average loss: 2.8665 Iteration: 3293; Percent complete: 82.3%; Average loss: 2.7691 Iteration: 3294; Percent complete: 
82.3%; Average loss: 2.6405 Iteration: 3295; Percent complete: 82.4%; Average loss: 2.7775 Iteration: 3296; Percent complete: 82.4%; Average loss: 2.7935 Iteration: 3297; Percent complete: 82.4%; Average loss: 2.8025 Iteration: 3298; Percent complete: 82.5%; Average loss: 2.7337 Iteration: 3299; Percent complete: 82.5%; Average loss: 2.8164 Iteration: 3300; Percent complete: 82.5%; Average loss: 2.5449 Iteration: 3301; Percent complete: 82.5%; Average loss: 2.8276 Iteration: 3302; Percent complete: 82.5%; Average loss: 2.9934 Iteration: 3303; Percent complete: 82.6%; Average loss: 2.9912 Iteration: 3304; Percent complete: 82.6%; Average loss: 2.8345 Iteration: 3305; Percent complete: 82.6%; Average loss: 2.4922 Iteration: 3306; Percent complete: 82.7%; Average loss: 2.8092 Iteration: 3307; Percent complete: 82.7%; Average loss: 2.7085 Iteration: 3308; Percent complete: 82.7%; Average loss: 2.7142 Iteration: 3309; Percent complete: 82.7%; Average loss: 2.5533 Iteration: 3310; Percent complete: 82.8%; Average loss: 2.9535 Iteration: 3311; Percent complete: 82.8%; Average loss: 2.6505 Iteration: 3312; Percent complete: 82.8%; Average loss: 2.9167 Iteration: 3313; Percent complete: 82.8%; Average loss: 2.8041 Iteration: 3314; Percent complete: 82.8%; Average loss: 2.6183 Iteration: 3315; Percent complete: 82.9%; Average loss: 2.8982 Iteration: 3316; Percent complete: 82.9%; Average loss: 2.8889 Iteration: 3317; Percent complete: 82.9%; Average loss: 2.8831 Iteration: 3318; Percent complete: 83.0%; Average loss: 2.6181 Iteration: 3319; Percent complete: 83.0%; Average loss: 2.7601 Iteration: 3320; Percent complete: 83.0%; Average loss: 2.7704 Iteration: 3321; Percent complete: 83.0%; Average loss: 2.9006 Iteration: 3322; Percent complete: 83.0%; Average loss: 2.6834 Iteration: 3323; Percent complete: 83.1%; Average loss: 2.8101 Iteration: 3324; Percent complete: 83.1%; Average loss: 2.9290 Iteration: 3325; Percent complete: 83.1%; Average loss: 2.6750 Iteration: 3326; Percent complete: 83.2%; Average loss: 2.8915 Iteration: 3327; Percent complete: 83.2%; Average loss: 2.5979 Iteration: 3328; Percent complete: 83.2%; Average loss: 2.7944 Iteration: 3329; Percent complete: 83.2%; Average loss: 2.6119 Iteration: 3330; Percent complete: 83.2%; Average loss: 2.5653 Iteration: 3331; Percent complete: 83.3%; Average loss: 2.8221 Iteration: 3332; Percent complete: 83.3%; Average loss: 2.8275 Iteration: 3333; Percent complete: 83.3%; Average loss: 3.1093 Iteration: 3334; Percent complete: 83.4%; Average loss: 2.8110 Iteration: 3335; Percent complete: 83.4%; Average loss: 2.7475 Iteration: 3336; Percent complete: 83.4%; Average loss: 2.6318 Iteration: 3337; Percent complete: 83.4%; Average loss: 2.5275 Iteration: 3338; Percent complete: 83.5%; Average loss: 2.8470 Iteration: 3339; Percent complete: 83.5%; Average loss: 2.7845 Iteration: 3340; Percent complete: 83.5%; Average loss: 2.9800 Iteration: 3341; Percent complete: 83.5%; Average loss: 2.9160 Iteration: 3342; Percent complete: 83.5%; Average loss: 2.7011 Iteration: 3343; Percent complete: 83.6%; Average loss: 2.8546 Iteration: 3344; Percent complete: 83.6%; Average loss: 2.7385 Iteration: 3345; Percent complete: 83.6%; Average loss: 2.8118 Iteration: 3346; Percent complete: 83.7%; Average loss: 2.7529 Iteration: 3347; Percent complete: 83.7%; Average loss: 2.7734 Iteration: 3348; Percent complete: 83.7%; Average loss: 2.7753 Iteration: 3349; Percent complete: 83.7%; Average loss: 2.6467 Iteration: 3350; Percent complete: 83.8%; Average loss: 
2.7972 Iteration: 3351; Percent complete: 83.8%; Average loss: 3.0314 Iteration: 3352; Percent complete: 83.8%; Average loss: 2.9924 Iteration: 3353; Percent complete: 83.8%; Average loss: 2.7289 Iteration: 3354; Percent complete: 83.9%; Average loss: 2.7298 Iteration: 3355; Percent complete: 83.9%; Average loss: 2.6479 Iteration: 3356; Percent complete: 83.9%; Average loss: 3.0786 Iteration: 3357; Percent complete: 83.9%; Average loss: 2.5519 Iteration: 3358; Percent complete: 84.0%; Average loss: 2.6703 Iteration: 3359; Percent complete: 84.0%; Average loss: 3.1028 Iteration: 3360; Percent complete: 84.0%; Average loss: 2.6022 Iteration: 3361; Percent complete: 84.0%; Average loss: 2.8689 Iteration: 3362; Percent complete: 84.0%; Average loss: 2.4497 Iteration: 3363; Percent complete: 84.1%; Average loss: 2.7613 Iteration: 3364; Percent complete: 84.1%; Average loss: 2.9651 Iteration: 3365; Percent complete: 84.1%; Average loss: 2.6612 Iteration: 3366; Percent complete: 84.2%; Average loss: 2.7400 Iteration: 3367; Percent complete: 84.2%; Average loss: 2.7422 Iteration: 3368; Percent complete: 84.2%; Average loss: 2.6190 Iteration: 3369; Percent complete: 84.2%; Average loss: 2.8219 Iteration: 3370; Percent complete: 84.2%; Average loss: 2.7383 Iteration: 3371; Percent complete: 84.3%; Average loss: 2.9310 Iteration: 3372; Percent complete: 84.3%; Average loss: 2.7052 Iteration: 3373; Percent complete: 84.3%; Average loss: 2.7730 Iteration: 3374; Percent complete: 84.4%; Average loss: 2.6401 Iteration: 3375; Percent complete: 84.4%; Average loss: 2.8685 Iteration: 3376; Percent complete: 84.4%; Average loss: 2.7484 Iteration: 3377; Percent complete: 84.4%; Average loss: 2.8325 Iteration: 3378; Percent complete: 84.5%; Average loss: 2.6657 Iteration: 3379; Percent complete: 84.5%; Average loss: 3.0674 Iteration: 3380; Percent complete: 84.5%; Average loss: 2.8652 Iteration: 3381; Percent complete: 84.5%; Average loss: 2.9784 Iteration: 3382; Percent complete: 84.5%; Average loss: 2.7138 Iteration: 3383; Percent complete: 84.6%; Average loss: 2.8048 Iteration: 3384; Percent complete: 84.6%; Average loss: 2.7399 Iteration: 3385; Percent complete: 84.6%; Average loss: 2.8351 Iteration: 3386; Percent complete: 84.7%; Average loss: 2.7233 Iteration: 3387; Percent complete: 84.7%; Average loss: 2.6694 Iteration: 3388; Percent complete: 84.7%; Average loss: 2.6180 Iteration: 3389; Percent complete: 84.7%; Average loss: 2.7048 Iteration: 3390; Percent complete: 84.8%; Average loss: 2.9656 Iteration: 3391; Percent complete: 84.8%; Average loss: 2.5731 Iteration: 3392; Percent complete: 84.8%; Average loss: 2.5235 Iteration: 3393; Percent complete: 84.8%; Average loss: 2.8471 Iteration: 3394; Percent complete: 84.9%; Average loss: 2.8093 Iteration: 3395; Percent complete: 84.9%; Average loss: 2.9317 Iteration: 3396; Percent complete: 84.9%; Average loss: 2.5665 Iteration: 3397; Percent complete: 84.9%; Average loss: 2.7594 Iteration: 3398; Percent complete: 85.0%; Average loss: 3.1906 Iteration: 3399; Percent complete: 85.0%; Average loss: 2.8476 Iteration: 3400; Percent complete: 85.0%; Average loss: 2.8356 Iteration: 3401; Percent complete: 85.0%; Average loss: 2.6438 Iteration: 3402; Percent complete: 85.0%; Average loss: 2.7798 Iteration: 3403; Percent complete: 85.1%; Average loss: 2.9393 Iteration: 3404; Percent complete: 85.1%; Average loss: 2.5771 Iteration: 3405; Percent complete: 85.1%; Average loss: 2.7987 Iteration: 3406; Percent complete: 85.2%; Average loss: 2.4453 Iteration: 3407; 
Percent complete: 85.2%; Average loss: 2.7003 Iteration: 3408; Percent complete: 85.2%; Average loss: 2.8233 Iteration: 3409; Percent complete: 85.2%; Average loss: 2.7118 Iteration: 3410; Percent complete: 85.2%; Average loss: 2.7159 Iteration: 3411; Percent complete: 85.3%; Average loss: 2.7632 Iteration: 3412; Percent complete: 85.3%; Average loss: 3.0476 Iteration: 3413; Percent complete: 85.3%; Average loss: 2.7619 Iteration: 3414; Percent complete: 85.4%; Average loss: 2.8565 Iteration: 3415; Percent complete: 85.4%; Average loss: 2.6504 Iteration: 3416; Percent complete: 85.4%; Average loss: 2.9379 Iteration: 3417; Percent complete: 85.4%; Average loss: 2.9231 Iteration: 3418; Percent complete: 85.5%; Average loss: 2.8263 Iteration: 3419; Percent complete: 85.5%; Average loss: 2.4794 Iteration: 3420; Percent complete: 85.5%; Average loss: 2.9195 Iteration: 3421; Percent complete: 85.5%; Average loss: 2.8049 Iteration: 3422; Percent complete: 85.5%; Average loss: 2.8836 Iteration: 3423; Percent complete: 85.6%; Average loss: 2.8092 Iteration: 3424; Percent complete: 85.6%; Average loss: 2.8893 Iteration: 3425; Percent complete: 85.6%; Average loss: 2.6495 Iteration: 3426; Percent complete: 85.7%; Average loss: 2.8744 Iteration: 3427; Percent complete: 85.7%; Average loss: 2.7257 Iteration: 3428; Percent complete: 85.7%; Average loss: 2.6152 Iteration: 3429; Percent complete: 85.7%; Average loss: 2.6925 Iteration: 3430; Percent complete: 85.8%; Average loss: 2.5822 Iteration: 3431; Percent complete: 85.8%; Average loss: 2.8095 Iteration: 3432; Percent complete: 85.8%; Average loss: 2.7983 Iteration: 3433; Percent complete: 85.8%; Average loss: 2.7745 Iteration: 3434; Percent complete: 85.9%; Average loss: 3.0565 Iteration: 3435; Percent complete: 85.9%; Average loss: 2.8169 Iteration: 3436; Percent complete: 85.9%; Average loss: 2.8622 Iteration: 3437; Percent complete: 85.9%; Average loss: 3.0570 Iteration: 3438; Percent complete: 86.0%; Average loss: 2.8516 Iteration: 3439; Percent complete: 86.0%; Average loss: 2.7512 Iteration: 3440; Percent complete: 86.0%; Average loss: 2.7595 Iteration: 3441; Percent complete: 86.0%; Average loss: 2.6732 Iteration: 3442; Percent complete: 86.1%; Average loss: 3.0591 Iteration: 3443; Percent complete: 86.1%; Average loss: 2.9258 Iteration: 3444; Percent complete: 86.1%; Average loss: 2.7204 Iteration: 3445; Percent complete: 86.1%; Average loss: 2.5782 Iteration: 3446; Percent complete: 86.2%; Average loss: 2.6478 Iteration: 3447; Percent complete: 86.2%; Average loss: 2.7115 Iteration: 3448; Percent complete: 86.2%; Average loss: 2.6866 Iteration: 3449; Percent complete: 86.2%; Average loss: 2.5733 Iteration: 3450; Percent complete: 86.2%; Average loss: 2.7153 Iteration: 3451; Percent complete: 86.3%; Average loss: 2.4423 Iteration: 3452; Percent complete: 86.3%; Average loss: 2.6738 Iteration: 3453; Percent complete: 86.3%; Average loss: 2.7173 Iteration: 3454; Percent complete: 86.4%; Average loss: 2.8420 Iteration: 3455; Percent complete: 86.4%; Average loss: 2.6068 Iteration: 3456; Percent complete: 86.4%; Average loss: 2.7328 Iteration: 3457; Percent complete: 86.4%; Average loss: 2.9213 Iteration: 3458; Percent complete: 86.5%; Average loss: 2.8637 Iteration: 3459; Percent complete: 86.5%; Average loss: 2.7347 Iteration: 3460; Percent complete: 86.5%; Average loss: 2.5633 Iteration: 3461; Percent complete: 86.5%; Average loss: 2.7679 Iteration: 3462; Percent complete: 86.6%; Average loss: 2.8367 Iteration: 3463; Percent complete: 86.6%; 
Average loss: 2.6554 Iteration: 3464; Percent complete: 86.6%; Average loss: 2.5820 Iteration: 3465; Percent complete: 86.6%; Average loss: 2.6613 Iteration: 3466; Percent complete: 86.7%; Average loss: 2.5926 Iteration: 3467; Percent complete: 86.7%; Average loss: 2.6492 Iteration: 3468; Percent complete: 86.7%; Average loss: 2.9478 Iteration: 3469; Percent complete: 86.7%; Average loss: 2.7137 Iteration: 3470; Percent complete: 86.8%; Average loss: 2.7513 Iteration: 3471; Percent complete: 86.8%; Average loss: 2.6462 Iteration: 3472; Percent complete: 86.8%; Average loss: 2.8227 Iteration: 3473; Percent complete: 86.8%; Average loss: 2.7623 Iteration: 3474; Percent complete: 86.9%; Average loss: 2.7577 Iteration: 3475; Percent complete: 86.9%; Average loss: 2.6858 Iteration: 3476; Percent complete: 86.9%; Average loss: 2.4385 Iteration: 3477; Percent complete: 86.9%; Average loss: 2.6848 Iteration: 3478; Percent complete: 87.0%; Average loss: 2.8411 Iteration: 3479; Percent complete: 87.0%; Average loss: 2.8551 Iteration: 3480; Percent complete: 87.0%; Average loss: 2.7238 Iteration: 3481; Percent complete: 87.0%; Average loss: 2.9112 Iteration: 3482; Percent complete: 87.1%; Average loss: 2.7277 Iteration: 3483; Percent complete: 87.1%; Average loss: 2.7161 Iteration: 3484; Percent complete: 87.1%; Average loss: 2.5863 Iteration: 3485; Percent complete: 87.1%; Average loss: 2.5587 Iteration: 3486; Percent complete: 87.2%; Average loss: 2.4542 Iteration: 3487; Percent complete: 87.2%; Average loss: 2.8323 Iteration: 3488; Percent complete: 87.2%; Average loss: 2.4531 Iteration: 3489; Percent complete: 87.2%; Average loss: 2.9305 Iteration: 3490; Percent complete: 87.2%; Average loss: 2.8885 Iteration: 3491; Percent complete: 87.3%; Average loss: 2.7853 Iteration: 3492; Percent complete: 87.3%; Average loss: 2.5887 Iteration: 3493; Percent complete: 87.3%; Average loss: 2.6556 Iteration: 3494; Percent complete: 87.4%; Average loss: 2.7188 Iteration: 3495; Percent complete: 87.4%; Average loss: 2.8360 Iteration: 3496; Percent complete: 87.4%; Average loss: 2.7188 Iteration: 3497; Percent complete: 87.4%; Average loss: 2.5011 Iteration: 3498; Percent complete: 87.5%; Average loss: 2.7631 Iteration: 3499; Percent complete: 87.5%; Average loss: 2.7893 Iteration: 3500; Percent complete: 87.5%; Average loss: 2.6360 Iteration: 3501; Percent complete: 87.5%; Average loss: 2.7805 Iteration: 3502; Percent complete: 87.5%; Average loss: 2.9847 Iteration: 3503; Percent complete: 87.6%; Average loss: 2.5534 Iteration: 3504; Percent complete: 87.6%; Average loss: 2.7519 Iteration: 3505; Percent complete: 87.6%; Average loss: 2.8532 Iteration: 3506; Percent complete: 87.6%; Average loss: 2.7373 Iteration: 3507; Percent complete: 87.7%; Average loss: 2.5770 Iteration: 3508; Percent complete: 87.7%; Average loss: 2.7735 Iteration: 3509; Percent complete: 87.7%; Average loss: 2.7536 Iteration: 3510; Percent complete: 87.8%; Average loss: 2.9242 Iteration: 3511; Percent complete: 87.8%; Average loss: 2.6295 Iteration: 3512; Percent complete: 87.8%; Average loss: 2.6780 Iteration: 3513; Percent complete: 87.8%; Average loss: 2.5630 Iteration: 3514; Percent complete: 87.8%; Average loss: 2.4088 Iteration: 3515; Percent complete: 87.9%; Average loss: 2.9230 Iteration: 3516; Percent complete: 87.9%; Average loss: 2.7618 Iteration: 3517; Percent complete: 87.9%; Average loss: 2.8721 Iteration: 3518; Percent complete: 87.9%; Average loss: 2.8440 Iteration: 3519; Percent complete: 88.0%; Average loss: 2.6416 
Iteration: 3520; Percent complete: 88.0%; Average loss: 2.9133 Iteration: 3521; Percent complete: 88.0%; Average loss: 2.6633 Iteration: 3522; Percent complete: 88.0%; Average loss: 2.9597 Iteration: 3523; Percent complete: 88.1%; Average loss: 2.7072 Iteration: 3524; Percent complete: 88.1%; Average loss: 2.5608 Iteration: 3525; Percent complete: 88.1%; Average loss: 2.5661 Iteration: 3526; Percent complete: 88.1%; Average loss: 2.6540 Iteration: 3527; Percent complete: 88.2%; Average loss: 2.8961 Iteration: 3528; Percent complete: 88.2%; Average loss: 2.7178 Iteration: 3529; Percent complete: 88.2%; Average loss: 2.6997 Iteration: 3530; Percent complete: 88.2%; Average loss: 2.9529 Iteration: 3531; Percent complete: 88.3%; Average loss: 2.6818 Iteration: 3532; Percent complete: 88.3%; Average loss: 2.6512 Iteration: 3533; Percent complete: 88.3%; Average loss: 2.5660 Iteration: 3534; Percent complete: 88.3%; Average loss: 2.7073 Iteration: 3535; Percent complete: 88.4%; Average loss: 2.7309 Iteration: 3536; Percent complete: 88.4%; Average loss: 2.6608 Iteration: 3537; Percent complete: 88.4%; Average loss: 2.6869 Iteration: 3538; Percent complete: 88.4%; Average loss: 2.5121 Iteration: 3539; Percent complete: 88.5%; Average loss: 2.7655 Iteration: 3540; Percent complete: 88.5%; Average loss: 2.8573 Iteration: 3541; Percent complete: 88.5%; Average loss: 2.6960 Iteration: 3542; Percent complete: 88.5%; Average loss: 3.0470 Iteration: 3543; Percent complete: 88.6%; Average loss: 2.4852 Iteration: 3544; Percent complete: 88.6%; Average loss: 2.5999 Iteration: 3545; Percent complete: 88.6%; Average loss: 2.8288 Iteration: 3546; Percent complete: 88.6%; Average loss: 2.5044 Iteration: 3547; Percent complete: 88.7%; Average loss: 2.6438 Iteration: 3548; Percent complete: 88.7%; Average loss: 2.8283 Iteration: 3549; Percent complete: 88.7%; Average loss: 2.6391 Iteration: 3550; Percent complete: 88.8%; Average loss: 2.7177 Iteration: 3551; Percent complete: 88.8%; Average loss: 2.6063 Iteration: 3552; Percent complete: 88.8%; Average loss: 2.8684 Iteration: 3553; Percent complete: 88.8%; Average loss: 2.6332 Iteration: 3554; Percent complete: 88.8%; Average loss: 2.8141 Iteration: 3555; Percent complete: 88.9%; Average loss: 2.7931 Iteration: 3556; Percent complete: 88.9%; Average loss: 2.6413 Iteration: 3557; Percent complete: 88.9%; Average loss: 2.9695 Iteration: 3558; Percent complete: 88.9%; Average loss: 2.4904 Iteration: 3559; Percent complete: 89.0%; Average loss: 2.7186 Iteration: 3560; Percent complete: 89.0%; Average loss: 2.5988 Iteration: 3561; Percent complete: 89.0%; Average loss: 3.0710 Iteration: 3562; Percent complete: 89.0%; Average loss: 2.4977 Iteration: 3563; Percent complete: 89.1%; Average loss: 2.9020 Iteration: 3564; Percent complete: 89.1%; Average loss: 2.6981 Iteration: 3565; Percent complete: 89.1%; Average loss: 2.7470 Iteration: 3566; Percent complete: 89.1%; Average loss: 2.7962 Iteration: 3567; Percent complete: 89.2%; Average loss: 2.7116 Iteration: 3568; Percent complete: 89.2%; Average loss: 2.9167 Iteration: 3569; Percent complete: 89.2%; Average loss: 2.6647 Iteration: 3570; Percent complete: 89.2%; Average loss: 2.8251 Iteration: 3571; Percent complete: 89.3%; Average loss: 2.9812 Iteration: 3572; Percent complete: 89.3%; Average loss: 2.8171 Iteration: 3573; Percent complete: 89.3%; Average loss: 2.5990 Iteration: 3574; Percent complete: 89.3%; Average loss: 2.8634 Iteration: 3575; Percent complete: 89.4%; Average loss: 2.7031 Iteration: 3576; Percent 
complete: 89.4%; Average loss: 2.7238 Iteration: 3577; Percent complete: 89.4%; Average loss: 2.7816 Iteration: 3578; Percent complete: 89.5%; Average loss: 2.4978 Iteration: 3579; Percent complete: 89.5%; Average loss: 2.5910 Iteration: 3580; Percent complete: 89.5%; Average loss: 2.7579 Iteration: 3581; Percent complete: 89.5%; Average loss: 2.7716 Iteration: 3582; Percent complete: 89.5%; Average loss: 2.7300 Iteration: 3583; Percent complete: 89.6%; Average loss: 2.7158 Iteration: 3584; Percent complete: 89.6%; Average loss: 2.7002 Iteration: 3585; Percent complete: 89.6%; Average loss: 2.6430 Iteration: 3586; Percent complete: 89.6%; Average loss: 2.5825 Iteration: 3587; Percent complete: 89.7%; Average loss: 2.9609 Iteration: 3588; Percent complete: 89.7%; Average loss: 2.8616 Iteration: 3589; Percent complete: 89.7%; Average loss: 2.8664 Iteration: 3590; Percent complete: 89.8%; Average loss: 2.5998 Iteration: 3591; Percent complete: 89.8%; Average loss: 2.6829 Iteration: 3592; Percent complete: 89.8%; Average loss: 2.7706 Iteration: 3593; Percent complete: 89.8%; Average loss: 2.6104 Iteration: 3594; Percent complete: 89.8%; Average loss: 2.8460 Iteration: 3595; Percent complete: 89.9%; Average loss: 2.7226 Iteration: 3596; Percent complete: 89.9%; Average loss: 2.4558 Iteration: 3597; Percent complete: 89.9%; Average loss: 2.5271 Iteration: 3598; Percent complete: 90.0%; Average loss: 2.7693 Iteration: 3599; Percent complete: 90.0%; Average loss: 2.6663 Iteration: 3600; Percent complete: 90.0%; Average loss: 2.7900 Iteration: 3601; Percent complete: 90.0%; Average loss: 2.6120 Iteration: 3602; Percent complete: 90.0%; Average loss: 2.5945 Iteration: 3603; Percent complete: 90.1%; Average loss: 2.8418 Iteration: 3604; Percent complete: 90.1%; Average loss: 2.6728 Iteration: 3605; Percent complete: 90.1%; Average loss: 2.7481 Iteration: 3606; Percent complete: 90.1%; Average loss: 2.7758 Iteration: 3607; Percent complete: 90.2%; Average loss: 2.6233 Iteration: 3608; Percent complete: 90.2%; Average loss: 2.5680 Iteration: 3609; Percent complete: 90.2%; Average loss: 2.4353 Iteration: 3610; Percent complete: 90.2%; Average loss: 2.7189 Iteration: 3611; Percent complete: 90.3%; Average loss: 2.7510 Iteration: 3612; Percent complete: 90.3%; Average loss: 2.8143 Iteration: 3613; Percent complete: 90.3%; Average loss: 2.8152 Iteration: 3614; Percent complete: 90.3%; Average loss: 2.8344 Iteration: 3615; Percent complete: 90.4%; Average loss: 2.8662 Iteration: 3616; Percent complete: 90.4%; Average loss: 2.9766 Iteration: 3617; Percent complete: 90.4%; Average loss: 2.6965 Iteration: 3618; Percent complete: 90.5%; Average loss: 2.7794 Iteration: 3619; Percent complete: 90.5%; Average loss: 2.6095 Iteration: 3620; Percent complete: 90.5%; Average loss: 2.7045 Iteration: 3621; Percent complete: 90.5%; Average loss: 2.7062 Iteration: 3622; Percent complete: 90.5%; Average loss: 2.8503 Iteration: 3623; Percent complete: 90.6%; Average loss: 2.5181 Iteration: 3624; Percent complete: 90.6%; Average loss: 2.7004 Iteration: 3625; Percent complete: 90.6%; Average loss: 2.4640 Iteration: 3626; Percent complete: 90.6%; Average loss: 2.6130 Iteration: 3627; Percent complete: 90.7%; Average loss: 2.6244 Iteration: 3628; Percent complete: 90.7%; Average loss: 2.6534 Iteration: 3629; Percent complete: 90.7%; Average loss: 2.5128 Iteration: 3630; Percent complete: 90.8%; Average loss: 2.5921 Iteration: 3631; Percent complete: 90.8%; Average loss: 2.9007 Iteration: 3632; Percent complete: 90.8%; Average 
loss: 2.7359 Iteration: 3633; Percent complete: 90.8%; Average loss: 2.5678 Iteration: 3634; Percent complete: 90.8%; Average loss: 2.7303 Iteration: 3635; Percent complete: 90.9%; Average loss: 2.7013 Iteration: 3636; Percent complete: 90.9%; Average loss: 2.7520 Iteration: 3637; Percent complete: 90.9%; Average loss: 2.6942 Iteration: 3638; Percent complete: 91.0%; Average loss: 2.6487 Iteration: 3639; Percent complete: 91.0%; Average loss: 2.5402 Iteration: 3640; Percent complete: 91.0%; Average loss: 2.8736 Iteration: 3641; Percent complete: 91.0%; Average loss: 2.6484 Iteration: 3642; Percent complete: 91.0%; Average loss: 2.5924 Iteration: 3643; Percent complete: 91.1%; Average loss: 2.4507 Iteration: 3644; Percent complete: 91.1%; Average loss: 2.6760 Iteration: 3645; Percent complete: 91.1%; Average loss: 2.8394 Iteration: 3646; Percent complete: 91.1%; Average loss: 2.8046 Iteration: 3647; Percent complete: 91.2%; Average loss: 2.8666 Iteration: 3648; Percent complete: 91.2%; Average loss: 2.6954 Iteration: 3649; Percent complete: 91.2%; Average loss: 2.7876 Iteration: 3650; Percent complete: 91.2%; Average loss: 2.8933 Iteration: 3651; Percent complete: 91.3%; Average loss: 2.7134 Iteration: 3652; Percent complete: 91.3%; Average loss: 2.7161 Iteration: 3653; Percent complete: 91.3%; Average loss: 2.6237 Iteration: 3654; Percent complete: 91.3%; Average loss: 2.7767 Iteration: 3655; Percent complete: 91.4%; Average loss: 2.5633 Iteration: 3656; Percent complete: 91.4%; Average loss: 2.6643 Iteration: 3657; Percent complete: 91.4%; Average loss: 2.7749 Iteration: 3658; Percent complete: 91.5%; Average loss: 2.7923 Iteration: 3659; Percent complete: 91.5%; Average loss: 2.8654 Iteration: 3660; Percent complete: 91.5%; Average loss: 2.5470 Iteration: 3661; Percent complete: 91.5%; Average loss: 2.7791 Iteration: 3662; Percent complete: 91.5%; Average loss: 2.8268 Iteration: 3663; Percent complete: 91.6%; Average loss: 3.0794 Iteration: 3664; Percent complete: 91.6%; Average loss: 2.6716 Iteration: 3665; Percent complete: 91.6%; Average loss: 2.6613 Iteration: 3666; Percent complete: 91.6%; Average loss: 2.7050 Iteration: 3667; Percent complete: 91.7%; Average loss: 2.4866 Iteration: 3668; Percent complete: 91.7%; Average loss: 2.9096 Iteration: 3669; Percent complete: 91.7%; Average loss: 2.7074 Iteration: 3670; Percent complete: 91.8%; Average loss: 2.7392 Iteration: 3671; Percent complete: 91.8%; Average loss: 2.7101 Iteration: 3672; Percent complete: 91.8%; Average loss: 2.8422 Iteration: 3673; Percent complete: 91.8%; Average loss: 2.8629 Iteration: 3674; Percent complete: 91.8%; Average loss: 2.6977 Iteration: 3675; Percent complete: 91.9%; Average loss: 2.8871 Iteration: 3676; Percent complete: 91.9%; Average loss: 2.5630 Iteration: 3677; Percent complete: 91.9%; Average loss: 2.8389 Iteration: 3678; Percent complete: 92.0%; Average loss: 2.3643 Iteration: 3679; Percent complete: 92.0%; Average loss: 2.4070 Iteration: 3680; Percent complete: 92.0%; Average loss: 2.6462 Iteration: 3681; Percent complete: 92.0%; Average loss: 2.7410 Iteration: 3682; Percent complete: 92.0%; Average loss: 2.5481 Iteration: 3683; Percent complete: 92.1%; Average loss: 2.5981 Iteration: 3684; Percent complete: 92.1%; Average loss: 2.8090 Iteration: 3685; Percent complete: 92.1%; Average loss: 2.5831 Iteration: 3686; Percent complete: 92.2%; Average loss: 2.5911 Iteration: 3687; Percent complete: 92.2%; Average loss: 2.6065 Iteration: 3688; Percent complete: 92.2%; Average loss: 2.5044 Iteration: 
3689; Percent complete: 92.2%; Average loss: 2.5287 Iteration: 3690; Percent complete: 92.2%; Average loss: 2.5806 Iteration: 3691; Percent complete: 92.3%; Average loss: 2.6812 Iteration: 3692; Percent complete: 92.3%; Average loss: 2.5288 Iteration: 3693; Percent complete: 92.3%; Average loss: 2.7254 Iteration: 3694; Percent complete: 92.3%; Average loss: 2.6593 Iteration: 3695; Percent complete: 92.4%; Average loss: 2.5323 Iteration: 3696; Percent complete: 92.4%; Average loss: 2.5914 Iteration: 3697; Percent complete: 92.4%; Average loss: 2.5428 Iteration: 3698; Percent complete: 92.5%; Average loss: 2.6581 Iteration: 3699; Percent complete: 92.5%; Average loss: 2.8346 Iteration: 3700; Percent complete: 92.5%; Average loss: 2.5997 Iteration: 3701; Percent complete: 92.5%; Average loss: 2.7347 Iteration: 3702; Percent complete: 92.5%; Average loss: 2.7461 Iteration: 3703; Percent complete: 92.6%; Average loss: 2.8748 Iteration: 3704; Percent complete: 92.6%; Average loss: 2.5956 Iteration: 3705; Percent complete: 92.6%; Average loss: 2.5498 Iteration: 3706; Percent complete: 92.7%; Average loss: 2.6605 Iteration: 3707; Percent complete: 92.7%; Average loss: 2.6263 Iteration: 3708; Percent complete: 92.7%; Average loss: 2.6456 Iteration: 3709; Percent complete: 92.7%; Average loss: 2.6325 Iteration: 3710; Percent complete: 92.8%; Average loss: 2.7725 Iteration: 3711; Percent complete: 92.8%; Average loss: 2.7409 Iteration: 3712; Percent complete: 92.8%; Average loss: 2.7805 Iteration: 3713; Percent complete: 92.8%; Average loss: 2.4792 Iteration: 3714; Percent complete: 92.8%; Average loss: 2.7833 Iteration: 3715; Percent complete: 92.9%; Average loss: 2.8362 Iteration: 3716; Percent complete: 92.9%; Average loss: 2.4989 Iteration: 3717; Percent complete: 92.9%; Average loss: 2.5389 Iteration: 3718; Percent complete: 93.0%; Average loss: 2.6619 Iteration: 3719; Percent complete: 93.0%; Average loss: 2.6547 Iteration: 3720; Percent complete: 93.0%; Average loss: 2.6203 Iteration: 3721; Percent complete: 93.0%; Average loss: 2.6934 Iteration: 3722; Percent complete: 93.0%; Average loss: 2.5123 Iteration: 3723; Percent complete: 93.1%; Average loss: 2.4109 Iteration: 3724; Percent complete: 93.1%; Average loss: 2.5620 Iteration: 3725; Percent complete: 93.1%; Average loss: 2.5432 Iteration: 3726; Percent complete: 93.2%; Average loss: 2.7950 Iteration: 3727; Percent complete: 93.2%; Average loss: 2.7883 Iteration: 3728; Percent complete: 93.2%; Average loss: 2.7214 Iteration: 3729; Percent complete: 93.2%; Average loss: 2.6044 Iteration: 3730; Percent complete: 93.2%; Average loss: 2.7774 Iteration: 3731; Percent complete: 93.3%; Average loss: 2.6117 Iteration: 3732; Percent complete: 93.3%; Average loss: 2.5966 Iteration: 3733; Percent complete: 93.3%; Average loss: 2.6373 Iteration: 3734; Percent complete: 93.3%; Average loss: 2.4890 Iteration: 3735; Percent complete: 93.4%; Average loss: 2.6568 Iteration: 3736; Percent complete: 93.4%; Average loss: 2.5142 Iteration: 3737; Percent complete: 93.4%; Average loss: 2.7833 Iteration: 3738; Percent complete: 93.5%; Average loss: 2.6275 Iteration: 3739; Percent complete: 93.5%; Average loss: 2.6602 Iteration: 3740; Percent complete: 93.5%; Average loss: 2.7292 Iteration: 3741; Percent complete: 93.5%; Average loss: 2.8702 Iteration: 3742; Percent complete: 93.5%; Average loss: 2.7592 Iteration: 3743; Percent complete: 93.6%; Average loss: 2.6699 Iteration: 3744; Percent complete: 93.6%; Average loss: 2.8243 Iteration: 3745; Percent complete: 
93.6%; Average loss: 2.6528 Iteration: 3746; Percent complete: 93.7%; Average loss: 2.8849 Iteration: 3747; Percent complete: 93.7%; Average loss: 2.4702 Iteration: 3748; Percent complete: 93.7%; Average loss: 2.6245 Iteration: 3749; Percent complete: 93.7%; Average loss: 2.9023 Iteration: 3750; Percent complete: 93.8%; Average loss: 2.5852 Iteration: 3751; Percent complete: 93.8%; Average loss: 2.5216 Iteration: 3752; Percent complete: 93.8%; Average loss: 2.8775 Iteration: 3753; Percent complete: 93.8%; Average loss: 2.5398 Iteration: 3754; Percent complete: 93.8%; Average loss: 2.6048 Iteration: 3755; Percent complete: 93.9%; Average loss: 2.7164 Iteration: 3756; Percent complete: 93.9%; Average loss: 2.6510 Iteration: 3757; Percent complete: 93.9%; Average loss: 2.6593 Iteration: 3758; Percent complete: 94.0%; Average loss: 2.7515 Iteration: 3759; Percent complete: 94.0%; Average loss: 2.7016 Iteration: 3760; Percent complete: 94.0%; Average loss: 2.7312 Iteration: 3761; Percent complete: 94.0%; Average loss: 2.7067 Iteration: 3762; Percent complete: 94.0%; Average loss: 2.7403 Iteration: 3763; Percent complete: 94.1%; Average loss: 2.7714 Iteration: 3764; Percent complete: 94.1%; Average loss: 2.7488 Iteration: 3765; Percent complete: 94.1%; Average loss: 2.5779 Iteration: 3766; Percent complete: 94.2%; Average loss: 2.7491 Iteration: 3767; Percent complete: 94.2%; Average loss: 2.7735 Iteration: 3768; Percent complete: 94.2%; Average loss: 2.4522 Iteration: 3769; Percent complete: 94.2%; Average loss: 2.6236 Iteration: 3770; Percent complete: 94.2%; Average loss: 2.6418 Iteration: 3771; Percent complete: 94.3%; Average loss: 2.5025 Iteration: 3772; Percent complete: 94.3%; Average loss: 2.6619 Iteration: 3773; Percent complete: 94.3%; Average loss: 2.6335 Iteration: 3774; Percent complete: 94.3%; Average loss: 2.4925 Iteration: 3775; Percent complete: 94.4%; Average loss: 2.9345 Iteration: 3776; Percent complete: 94.4%; Average loss: 2.5474 Iteration: 3777; Percent complete: 94.4%; Average loss: 2.8251 Iteration: 3778; Percent complete: 94.5%; Average loss: 2.7957 Iteration: 3779; Percent complete: 94.5%; Average loss: 2.5869 Iteration: 3780; Percent complete: 94.5%; Average loss: 2.7586 Iteration: 3781; Percent complete: 94.5%; Average loss: 2.7409 Iteration: 3782; Percent complete: 94.5%; Average loss: 2.5650 Iteration: 3783; Percent complete: 94.6%; Average loss: 2.4700 Iteration: 3784; Percent complete: 94.6%; Average loss: 2.5777 Iteration: 3785; Percent complete: 94.6%; Average loss: 2.7086 Iteration: 3786; Percent complete: 94.7%; Average loss: 2.8533 Iteration: 3787; Percent complete: 94.7%; Average loss: 2.7960 Iteration: 3788; Percent complete: 94.7%; Average loss: 2.7753 Iteration: 3789; Percent complete: 94.7%; Average loss: 2.7169 Iteration: 3790; Percent complete: 94.8%; Average loss: 2.6647 Iteration: 3791; Percent complete: 94.8%; Average loss: 2.4406 Iteration: 3792; Percent complete: 94.8%; Average loss: 2.8725 Iteration: 3793; Percent complete: 94.8%; Average loss: 2.7423 Iteration: 3794; Percent complete: 94.8%; Average loss: 2.7362 Iteration: 3795; Percent complete: 94.9%; Average loss: 2.7047 Iteration: 3796; Percent complete: 94.9%; Average loss: 2.6596 Iteration: 3797; Percent complete: 94.9%; Average loss: 2.6177 Iteration: 3798; Percent complete: 95.0%; Average loss: 2.7913 Iteration: 3799; Percent complete: 95.0%; Average loss: 2.5664 Iteration: 3800; Percent complete: 95.0%; Average loss: 2.7985 Iteration: 3801; Percent complete: 95.0%; Average loss: 
2.8107 Iteration: 3802; Percent complete: 95.0%; Average loss: 2.5076 Iteration: 3803; Percent complete: 95.1%; Average loss: 2.5694 Iteration: 3804; Percent complete: 95.1%; Average loss: 2.8326 Iteration: 3805; Percent complete: 95.1%; Average loss: 3.0160 Iteration: 3806; Percent complete: 95.2%; Average loss: 2.7577 Iteration: 3807; Percent complete: 95.2%; Average loss: 2.8535 Iteration: 3808; Percent complete: 95.2%; Average loss: 2.6516 Iteration: 3809; Percent complete: 95.2%; Average loss: 2.4931 Iteration: 3810; Percent complete: 95.2%; Average loss: 2.8360 Iteration: 3811; Percent complete: 95.3%; Average loss: 2.4985 Iteration: 3812; Percent complete: 95.3%; Average loss: 2.6262 Iteration: 3813; Percent complete: 95.3%; Average loss: 2.5713 Iteration: 3814; Percent complete: 95.3%; Average loss: 2.6727 Iteration: 3815; Percent complete: 95.4%; Average loss: 2.6371 Iteration: 3816; Percent complete: 95.4%; Average loss: 2.6200 Iteration: 3817; Percent complete: 95.4%; Average loss: 2.7166 Iteration: 3818; Percent complete: 95.5%; Average loss: 2.6101 Iteration: 3819; Percent complete: 95.5%; Average loss: 2.6033 Iteration: 3820; Percent complete: 95.5%; Average loss: 2.4690 Iteration: 3821; Percent complete: 95.5%; Average loss: 2.4535 Iteration: 3822; Percent complete: 95.5%; Average loss: 2.6459 Iteration: 3823; Percent complete: 95.6%; Average loss: 2.4576 Iteration: 3824; Percent complete: 95.6%; Average loss: 2.7140 Iteration: 3825; Percent complete: 95.6%; Average loss: 2.8547 Iteration: 3826; Percent complete: 95.7%; Average loss: 2.6647 Iteration: 3827; Percent complete: 95.7%; Average loss: 2.5651 Iteration: 3828; Percent complete: 95.7%; Average loss: 2.5511 Iteration: 3829; Percent complete: 95.7%; Average loss: 2.5379 Iteration: 3830; Percent complete: 95.8%; Average loss: 2.5148 Iteration: 3831; Percent complete: 95.8%; Average loss: 2.8949 Iteration: 3832; Percent complete: 95.8%; Average loss: 2.9202 Iteration: 3833; Percent complete: 95.8%; Average loss: 2.6520 Iteration: 3834; Percent complete: 95.9%; Average loss: 2.9162 Iteration: 3835; Percent complete: 95.9%; Average loss: 2.6477 Iteration: 3836; Percent complete: 95.9%; Average loss: 2.5821 Iteration: 3837; Percent complete: 95.9%; Average loss: 2.7010 Iteration: 3838; Percent complete: 96.0%; Average loss: 2.7861 Iteration: 3839; Percent complete: 96.0%; Average loss: 2.3830 Iteration: 3840; Percent complete: 96.0%; Average loss: 2.4505 Iteration: 3841; Percent complete: 96.0%; Average loss: 2.4834 Iteration: 3842; Percent complete: 96.0%; Average loss: 2.7752 Iteration: 3843; Percent complete: 96.1%; Average loss: 2.7874 Iteration: 3844; Percent complete: 96.1%; Average loss: 2.6470 Iteration: 3845; Percent complete: 96.1%; Average loss: 2.7094 Iteration: 3846; Percent complete: 96.2%; Average loss: 2.8321 Iteration: 3847; Percent complete: 96.2%; Average loss: 2.5950 Iteration: 3848; Percent complete: 96.2%; Average loss: 2.5313 Iteration: 3849; Percent complete: 96.2%; Average loss: 2.5697 Iteration: 3850; Percent complete: 96.2%; Average loss: 2.7248 Iteration: 3851; Percent complete: 96.3%; Average loss: 2.8858 Iteration: 3852; Percent complete: 96.3%; Average loss: 2.5924 Iteration: 3853; Percent complete: 96.3%; Average loss: 2.4824 Iteration: 3854; Percent complete: 96.4%; Average loss: 2.7563 Iteration: 3855; Percent complete: 96.4%; Average loss: 2.5544 Iteration: 3856; Percent complete: 96.4%; Average loss: 2.8276 Iteration: 3857; Percent complete: 96.4%; Average loss: 2.6789 Iteration: 3858; 

.. GENERATED FROM PYTHON SOURCE LINES 1342-1347

Run Evaluation
~~~~~~~~~~~~~~

To chat with your model, run the following block.

.. GENERATED FROM PYTHON SOURCE LINES 1347-1359

.. code-block:: default


    # Set dropout layers to ``eval`` mode
    encoder.eval()
    decoder.eval()

    # Initialize search module
    searcher = GreedySearchDecoder(encoder, decoder)

    # Begin chatting (uncomment and run the following line to begin)
    # evaluateInput(encoder, decoder, searcher, voc)
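
If you prefer to script a few sanity-check queries rather than chat
interactively, a minimal sketch along the following lines can work. It
assumes the ``normalizeString`` and ``evaluate`` helpers defined earlier
in this tutorial (``evaluate`` is assumed to take the encoder, decoder,
searcher, vocabulary, and a normalized sentence, and to return the
decoded output tokens), so treat it as illustrative rather than part of
the generated script.

.. code-block:: default


    # Illustrative sketch: run a fixed list of queries through the trained
    # model instead of reading from stdin. ``normalizeString`` and
    # ``evaluate`` are assumed to be the helpers defined earlier.
    test_queries = ["hello", "where am I?", "who are you?"]
    for query in test_queries:
        try:
            output_words = evaluate(encoder, decoder, searcher, voc,
                                    normalizeString(query))
            # Trim padding and end-of-sentence tokens, as ``evaluateInput`` does.
            output_words = [w for w in output_words if w not in ("EOS", "PAD")]
            print(">", query)
            print("Bot:", " ".join(output_words))
        except KeyError:
            # Out-of-vocabulary words are assumed to raise KeyError during indexing.
            print("Error: Encountered unknown word.")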

.. GENERATED FROM PYTHON SOURCE LINES 1360-1372

Conclusion
----------

That’s all for this one, folks. Congratulations, you now know the
fundamentals of building a generative chatbot model! If you’re
interested, you can try tailoring the chatbot’s behavior by tweaking the
model and training parameters and by customizing the data that you train
the model on; a few illustrative knobs are sketched below.
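
The variable names and baseline values below are assumed to match the
configuration cells defined earlier in this tutorial, so treat this as
an illustrative starting point rather than a tuned recipe.

.. code-block:: default


    # Illustrative knobs to experiment with; names and baselines are assumed
    # to match the configuration defined earlier in this tutorial.
    hidden_size = 1024           # wider GRU hidden state (baseline assumed: 500)
    encoder_n_layers = 4         # deeper encoder (baseline assumed: 2)
    decoder_n_layers = 4         # deeper decoder (baseline assumed: 2)
    dropout = 0.2                # stronger regularization for the bigger model
    teacher_forcing_ratio = 0.5  # let the decoder rely on its own predictions
                                 # for half of the training steps
    learning_rate = 0.0001       # keep steps small as capacity grows
    n_iteration = 8000           # train longer than the 4000 iterations above

Keep in mind that deeper or wider models train more slowly and will
likely need more iterations before the average loss (around 2.5 at the
end of the run above) comes down further.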
Check out the other tutorials for more cool deep learning applications
in PyTorch!


.. rst-class:: sphx-glr-timing

   **Total running time of the script:** ( 2 minutes 17.834 seconds)


.. _sphx_glr_download_beginner_chatbot_tutorial.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: chatbot_tutorial.py `

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: chatbot_tutorial.ipynb `

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery `_