[Tutorial] [YOLO] YOLOv8 train: training parameters explained

Posted on 2024-8-22 11:48:36

For our purposes, the two settings that matter most are batch and workers; device is optional. A minimal example follows below.
batch: how many images are loaded per batch, which mainly affects speed; default is 16.
workers: number of data-loading worker threads, default 8. On Windows with the GPU build this must be set to 0, otherwise training errors out; the CPU build is unaffected.
device: selects CPU or GPU, defaulting to GPU; if CUDA is not installed it automatically falls back to CPU.
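As a rough sketch of the three settings above using the ultralytics Python API (the yolov8n.pt weights and coco8.yaml dataset are placeholders, swap in your own files):

from ultralytics import YOLO

# Load a pretrained model (placeholder weights file)
model = YOLO("yolov8n.pt")

# batch / workers / device as discussed above.
# On Windows with the GPU build, workers=0 avoids the data-loader error;
# device=0 picks the first GPU, device="cpu" forces CPU.
model.train(
    data="coco8.yaml",   # placeholder dataset config
    epochs=100,
    imgsz=640,
    batch=16,            # images per batch, default 16
    workers=0,           # default 8; set to 0 for Windows + GPU
    device=0,            # or "cpu" if CUDA is not installed
)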

Further explanation: https://blog.csdn.net/qq_37553692/article/details/130898732

Official docs: https://docs.ultralytics.com/modes/train/#train-settings
Argument | Default | Description
model | None | Specifies the model file for training. Accepts a path to either a .pt pretrained model or a .yaml configuration file. Essential for defining the model structure or initializing weights.
data | None | Path to the dataset configuration file (e.g., coco8.yaml). This file contains dataset-specific parameters, including paths to training and validation data, class names, and number of classes.
epochs | 100 | Total number of training epochs. Each epoch represents a full pass over the entire dataset. Adjusting this value can affect training duration and model performance.
time | None | Maximum training time in hours. If set, this overrides the epochs argument, allowing training to automatically stop after the specified duration. Useful for time-constrained training scenarios.
patience | 100 | Number of epochs to wait without improvement in validation metrics before early stopping the training. Helps prevent overfitting by stopping training when performance plateaus.
batch | 16 | Batch size, with three modes: set as an integer (e.g., batch=16), auto mode for 60% GPU memory utilization (batch=-1), or auto mode with specified utilization fraction (batch=0.70).
imgsz | 640 | Target image size for training. All images are resized to this dimension before being fed into the model. Affects model accuracy and computational complexity.
save | True | Enables saving of training checkpoints and final model weights. Useful for resuming training or model deployment.
save_period | -1 | Frequency of saving model checkpoints, specified in epochs. A value of -1 disables this feature. Useful for saving interim models during long training sessions.
cache | False | Enables caching of dataset images in memory (True/ram), on disk (disk), or disables it (False). Improves training speed by reducing disk I/O at the cost of increased memory usage.
device | None | Specifies the computational device(s) for training: a single GPU (device=0), multiple GPUs (device=0,1), CPU (device=cpu), or MPS for Apple silicon (device=mps).
workers | 8 | Number of worker threads for data loading (per RANK if Multi-GPU training). Influences the speed of data preprocessing and feeding into the model, especially useful in multi-GPU setups.
project | None | Name of the project directory where training outputs are saved. Allows for organized storage of different experiments.
name | None | Name of the training run. Used for creating a subdirectory within the project folder, where training logs and outputs are stored.
exist_ok | False | If True, allows overwriting of an existing project/name directory. Useful for iterative experimentation without needing to manually clear previous outputs.
pretrained | True | Determines whether to start training from a pretrained model. Can be a boolean value or a string path to a specific model from which to load weights. Enhances training efficiency and model performance.
optimizer | 'auto' | Choice of optimizer for training. Options include SGD, Adam, AdamW, NAdam, RAdam, RMSProp etc., or auto for automatic selection based on model configuration. Affects convergence speed and stability.
verbose | False | Enables verbose output during training, providing detailed logs and progress updates. Useful for debugging and closely monitoring the training process.
seed | 0 | Sets the random seed for training, ensuring reproducibility of results across runs with the same configurations.
deterministic | True | Forces deterministic algorithm use, ensuring reproducibility but may affect performance and speed due to the restriction on non-deterministic algorithms.
single_cls | False | Treats all classes in multi-class datasets as a single class during training. Useful for binary classification tasks or when focusing on object presence rather than classification.
rect | False | Enables rectangular training, optimizing batch composition for minimal padding. Can improve efficiency and speed but may affect model accuracy.
cos_lr | False | Utilizes a cosine learning rate scheduler, adjusting the learning rate following a cosine curve over epochs. Helps in managing learning rate for better convergence.
close_mosaic | 10 | Disables mosaic data augmentation in the last N epochs to stabilize training before completion. Setting to 0 disables this feature.
resume | False | Resumes training from the last saved checkpoint. Automatically loads model weights, optimizer state, and epoch count, continuing training seamlessly.
amp | True | Enables Automatic Mixed Precision (AMP) training, reducing memory usage and possibly speeding up training with minimal impact on accuracy.
fraction | 1.0 | Specifies the fraction of the dataset to use for training. Allows for training on a subset of the full dataset, useful for experiments or when resources are limited.
profile | False | Enables profiling of ONNX and TensorRT speeds during training, useful for optimizing model deployment.
freeze | None | Freezes the first N layers of the model or specified layers by index, reducing the number of trainable parameters. Useful for fine-tuning or transfer learning.
lr0 | 0.01 | Initial learning rate (i.e. SGD=1E-2, Adam=1E-3). Adjusting this value is crucial for the optimization process, influencing how rapidly model weights are updated.
lrf | 0.01 | Final learning rate as a fraction of the initial rate = (lr0 * lrf), used in conjunction with schedulers to adjust the learning rate over time.
momentum | 0.937 | Momentum factor for SGD or beta1 for Adam optimizers, influencing the incorporation of past gradients in the current update.
weight_decay | 0.0005 | L2 regularization term, penalizing large weights to prevent overfitting.
warmup_epochs | 3.0 | Number of epochs for learning rate warmup, gradually increasing the learning rate from a low value to the initial learning rate to stabilize training early on.
warmup_momentum | 0.8 | Initial momentum for warmup phase, gradually adjusting to the set momentum over the warmup period.
warmup_bias_lr | 0.1 | Learning rate for bias parameters during the warmup phase, helping stabilize model training in the initial epochs.
box | 7.5 | Weight of the box loss component in the loss function, influencing how much emphasis is placed on accurately predicting bounding box coordinates.
cls | 0.5 | Weight of the classification loss in the total loss function, affecting the importance of correct class prediction relative to other components.
dfl | 1.5 | Weight of the distribution focal loss, used in certain YOLO versions for fine-grained classification.
pose | 12.0 | Weight of the pose loss in models trained for pose estimation, influencing the emphasis on accurately predicting pose keypoints.
kobj | 2.0 | Weight of the keypoint objectness loss in pose estimation models, balancing detection confidence with pose accuracy.
label_smoothing | 0.0 | Applies label smoothing, softening hard labels to a mix of the target label and a uniform distribution over labels, can improve generalization.
nbs | 64 | Nominal batch size for normalization of loss.
overlap_mask | True | Determines whether segmentation masks should overlap during training, applicable in instance segmentation tasks.
mask_ratio | 4 | Downsample ratio for segmentation masks, affecting the resolution of masks used during training.
dropout | 0.0 | Dropout rate for regularization in classification tasks, preventing overfitting by randomly omitting units during training.
val | True | Enables validation during training, allowing for periodic evaluation of model performance on a separate dataset.
plots | False | Generates and saves plots of training and validation metrics, as well as prediction examples, providing visual insights into model performance and learning progression.
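To make the table more concrete, here is a hedged sketch that passes a handful of the arguments above to model.train(); the values are illustrative, not recommendations, and the project/name paths are hypothetical:

from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # placeholder pretrained weights

# Illustrative combination of arguments from the table above
results = model.train(
    data="coco8.yaml",      # placeholder dataset config
    epochs=300,
    patience=50,            # early stop after 50 epochs without improvement
    batch=-1,               # auto batch size (about 60% of GPU memory)
    imgsz=640,
    optimizer="AdamW",
    lr0=0.001,
    cos_lr=True,            # cosine learning-rate schedule
    close_mosaic=10,        # disable mosaic augmentation for the last 10 epochs
    project="runs/train",   # hypothetical output directory
    name="exp1",            # hypothetical run name
    seed=0,
)

# If a run is interrupted, resuming from the last checkpoint looks roughly like:
# model = YOLO("runs/train/exp1/weights/last.pt")
# model.train(resume=True)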



Reply, posted on 2024-10-9 18:30:50:
6666666666666