Fri 26.07.2024 21:35

🥳 I've just published a new Python library for everyone who trains models: Loss Watch 📊

It lets you check on your model's training progress right from within the progress indicator!

If you want to clean up your Jupyter notebooks, give it a try!

Link: github.com/Libric0/loss_watch

A screenshot of the readme of the repository, explaining the usage of the library. It reads:

Usage

Similar to tqdm, loss-watch plots act as the iterable you loop over while training your model. The simplest way of using it is as follows:

from loss_watch import LossProgressBar

epochs = 100
for epoch, update in LossProgressBar(epochs):
    # Perform your model's training step and retrieve the training loss as a float
    train_loss = train_step()
    update(train_loss)

It does not really matter how you get your training loss here; any float works. This will give you a plot that looks something like this: Image of the progress indicator.

As you can see, your highest loss is displayed in red, and the lowest in a light cyan.
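
The snippet above leaves train_step() undefined. As a rough sketch only (assuming a PyTorch setup; the model, loss function, optimizer and toy data below are illustrative and not part of loss_watch), it could look like this:

import torch
import torch.nn as nn

model = nn.Linear(10, 1)                                   # toy model
criterion = nn.MSELoss()                                   # loss function
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
inputs, targets = torch.randn(32, 10), torch.randn(32, 1)  # toy batch

def train_step() -> float:
    # Run one optimization step and return the loss as a plain Python float.
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()
    return loss.item()                                     # any float works for update()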

Important! Your progress bar expects an update on the training loss after every step. Otherwise, your visuals might get weird. After all, why wouldn't you supply the training loss in every step?
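
One way to make sure update() gets a training loss on every step is to fold the mini-batch losses of each epoch into a single float, so update() is called exactly once per loop iteration. A minimal sketch under the same assumptions as above (num_batches and train_step() are illustrative, not part of loss_watch):

from loss_watch import LossProgressBar

epochs = 100
num_batches = 20  # illustrative; use the length of your data loader

for epoch, update in LossProgressBar(epochs):
    epoch_loss = 0.0
    for _ in range(num_batches):
        epoch_loss += train_step()    # one mini-batch step, as sketched above
    update(epoch_loss / num_batches)  # exactly one float per iteration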

Screenshot of the validation explanation. Instead of a single progress bar, it displays three; each shows the loss on the training or the validation set.

[Public] Replies: 0 Boosts: 0 Favorites: 0 · via Web
