
InternLM training framework and the 7B/30B demo training configurations

  • The emergence and development of large models have been driven by growing data volumes, increasing compute, and algorithmic improvements. These models deliver striking performance across tasks such as natural language processing, computer vision, and speech recognition. They typically adopt deep neural network architectures such as the Transformer, BERT, and GPT (Generative Pre-trained Transformer). The strength of large models lies in their ability to capture more complex and abstract features and relationships in data: by learning at large parameter scale, they generalize better across tasks and can perform well even without extensive domain-specific training data. They also face challenges, however, including heavy compute requirements, high training cost, dependence on large-scale data, and limited interpretability, so applying and developing them requires trade-offs among performance, cost, and ethics. InternLM-7B comprises a 7-billion-parameter base model and a chat model tailored for practical scenarios. It has two notable features: (1) it is trained on trillions of high-quality tokens, building a strong knowledge base; (2) it supports an 8k-token context window, allowing longer input sequences and stronger reasoning.

  • InternLM is an open-source, lightweight training framework designed to support large-model training without heavy dependencies. With a single codebase it supports pre-training on large clusters with thousands of GPUs as well as fine-tuning on a single GPU, while delivering strong performance optimizations; when training on 1024 GPUs, InternLM reaches close to 90% scaling efficiency. Based on the InternLM training framework, Shanghai AI Laboratory has released two open-source pre-trained models: InternLM-7B and InternLM-20B. Lagent is a lightweight, open-source agent framework built on large language models: it lets users quickly turn an LLM into several types of agents and provides typical tools to extend what the model can do, so the full capability of InternLM can be exercised through Lagent; a minimal usage sketch follows.
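
  • As a rough illustration of the agent workflow described above, the snippet below wraps an InternLM chat model in a ReAct-style agent that can call a Python-interpreter tool. It is only a minimal sketch modeled on public Lagent examples: the module paths, class names (HFTransformer, ReAct, ActionExecutor, PythonInterpreter) and the model identifier are assumptions that may differ between Lagent releases, so verify them against the Lagent documentation for your version.

    • # Minimal Lagent sketch (assumed API; verify against your Lagent version).
      from lagent.actions import ActionExecutor, PythonInterpreter  # tool execution
      from lagent.agents import ReAct                                # ReAct-style agent loop
      from lagent.llms import HFTransformer                          # HuggingFace-backed LLM wrapper

      # Load an InternLM chat model as the agent backbone (model name is illustrative).
      llm = HFTransformer("internlm/internlm-chat-7b")

      # Register a Python interpreter tool; the agent decides when to call it.
      agent = ReAct(llm=llm, action_executor=ActionExecutor(actions=[PythonInterpreter()]))

      result = agent.chat("Compute 37 * 41 and explain the steps.")
      print(result.response)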

  • A sample training configuration file for the 7B demo is shown below (a short note on the effective batch size follows the listing):

    • JOB_NAME = "7b_train"
      SEQ_LEN = 2048
      HIDDEN_SIZE = 4096
      NUM_ATTENTION_HEAD = 32
      MLP_RATIO = 8 / 3
      NUM_LAYER = 32
      VOCAB_SIZE = 103168
      MODEL_ONLY_FOLDER = "local:llm_ckpts/xxxx"
      # Ckpt folder format:
      # fs: 'local:/mnt/nfs/XXX'
      SAVE_CKPT_FOLDER = "local:llm_ckpts"
      LOAD_CKPT_FOLDER = "local:llm_ckpts/49"
      # boto3 Ckpt folder format:
      # import os
      # BOTO3_IP = os.environ["BOTO3_IP"] # boto3 bucket endpoint
      # SAVE_CKPT_FOLDER = f"boto3:s3://model_weights.{BOTO3_IP}/internlm"
      # LOAD_CKPT_FOLDER = f"boto3:s3://model_weights.{BOTO3_IP}/internlm/snapshot/1/"
      CHECKPOINT_EVERY = 50
      ckpt = dict(
          enable_save_ckpt=False,  # enable ckpt save.
          save_ckpt_folder=SAVE_CKPT_FOLDER,  # Path to save training ckpt.
          # load_ckpt_folder=LOAD_CKPT_FOLDER,  # Ckpt path to resume training (load weights and scheduler/context states).
          # load_model_only_folder=MODEL_ONLY_FOLDER,  # Path to initialize with given model weights.
          load_optimizer=True,  # Whether to load optimizer states when continuing training.
          checkpoint_every=CHECKPOINT_EVERY,
          async_upload=True,  # async ckpt upload (only works for boto3 ckpt).
          async_upload_tmp_folder="/dev/shm/internlm_tmp_ckpt/",  # path for temporary files during asynchronous upload.
          snapshot_ckpt_folder="/".join([SAVE_CKPT_FOLDER, "snapshot"]),  # directory for snapshot ckpt storage path.
          oss_snapshot_freq=int(CHECKPOINT_EVERY / 2),  # snapshot ckpt save frequency.
      )
      TRAIN_FOLDER = "/path/to/dataset"
      VALID_FOLDER = "/path/to/dataset"
      data = dict(
          seq_len=SEQ_LEN,
          # micro_num means the number of micro_batch contained in one gradient update
          micro_num=4,
          # packed_length = micro_bsz * SEQ_LEN
          micro_bsz=2,
          # defaults to the value of micro_num
          valid_micro_num=4,
          # defaults to 0, means disable evaluate
          valid_every=50,
          pack_sample_into_one=False,
          total_steps=50000,
          skip_batches="",
          rampup_batch_size="",
          # Datasets with less than 50 rows will be discarded
          min_length=50,
          # train_folder=TRAIN_FOLDER,
          # valid_folder=VALID_FOLDER,
      )
      grad_scaler = dict(
          fp16=dict(
              # the initial loss scale, defaults to 2**16
              initial_scale=2**16,
              # the minimum loss scale, defaults to None
              min_scale=1,
              # the number of steps to increase loss scale when no overflow occurs
              growth_interval=1000,
          ),
          # the multiplication factor for increasing loss scale, defaults to 2
          growth_factor=2,
          # the multiplication factor for decreasing loss scale, defaults to 0.5
          backoff_factor=0.5,
          # the maximum loss scale, defaults to None
          max_scale=2**24,
          # the number of overflows before decreasing loss scale, defaults to 2
          hysteresis=2,
      )
      hybrid_zero_optimizer = dict(
          # Enable low_level_optimizer overlap_communication
          overlap_sync_grad=True,
          overlap_sync_param=True,
          # bucket size for nccl communication params
          reduce_bucket_size=512 * 1024 * 1024,
          # grad clipping
          clip_grad_norm=1.0,
      )
      loss = dict(
          label_smoothing=0,
      )
      adam = dict(
          lr=1e-4,
          adam_beta1=0.9,
          adam_beta2=0.95,
          adam_beta2_c=0,
          adam_eps=1e-8,
          weight_decay=0.01,
      )
      lr_scheduler = dict(
          total_steps=data["total_steps"],
          init_steps=0,  # optimizer_warmup_step
          warmup_ratio=0.01,
          eta_min=1e-5,
          last_epoch=-1,
      )
      beta2_scheduler = dict(
          init_beta2=adam["adam_beta2"],
          c=adam["adam_beta2_c"],
          cur_iter=-1,
      )
      model = dict(
          checkpoint=False,  # The proportion of layers for activation checkpointing; the optional values are True/False/[0-1]
          num_attention_heads=NUM_ATTENTION_HEAD,
          embed_split_hidden=True,
          vocab_size=VOCAB_SIZE,
          embed_grad_scale=1,
          parallel_output=True,
          hidden_size=HIDDEN_SIZE,
          num_layers=NUM_LAYER,
          mlp_ratio=MLP_RATIO,
          apply_post_layer_norm=False,
          dtype="torch.float16",  # Support: "torch.float16", "torch.half", "torch.bfloat16", "torch.float32", "torch.tf32"
          norm_type="rmsnorm",
          layer_norm_epsilon=1e-5,
          use_flash_attn=True,
          num_chunks=1,  # if num_chunks > 1, interleaved pipeline scheduler is used.
      )
      """
      zero1 parallel:
          1. if zero1 <= 0, the size of the zero process group is equal to the size of the dp process group,
             so parameters will be divided within the range of dp.
          2. if zero1 == 1, zero is not used, and all dp groups retain the full amount of model parameters.
          3. zero1 > 1 and zero1 <= dp world size, the world size of zero is a subset of dp world size.
          For smaller models, it is usually a better choice to split the parameters within nodes with a setting <= 8.
      pipeline parallel (dict):
          1. size: int, the size of pipeline parallel.
          2. interleaved_overlap: bool, enable/disable communication overlap when using interleaved pipeline scheduler.
      tensor parallel: tensor parallel size, usually the number of GPUs per node.
      """
      parallel = dict(
          zero1=8,
          pipeline=dict(size=1, interleaved_overlap=True),
          sequence_parallel=False,
      )
      cudnn_deterministic = False
      cudnn_benchmark = False
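
  • To make the data settings above concrete: with micro_bsz=2 and SEQ_LEN=2048, each micro-batch packs micro_bsz * SEQ_LEN = 4096 tokens, and one gradient update consumes micro_num = 4 micro-batches per data-parallel rank. The sketch below works out that arithmetic; dp_size is a placeholder for the data-parallel world size of your actual launch, not a value taken from the config.

    • # Back-of-envelope token count per optimizer step for the 7B demo config.
      SEQ_LEN = 2048
      micro_bsz = 2    # sequences per micro-batch
      micro_num = 4    # micro-batches per gradient update (per data-parallel rank)
      dp_size = 8      # assumed data-parallel world size; depends on your launch setup

      packed_length = micro_bsz * SEQ_LEN                     # tokens per micro-batch -> 4096
      tokens_per_step = micro_num * packed_length * dp_size   # global tokens per update -> 131072
      print(packed_length, tokens_per_step)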
      
  • A sample training configuration file for the 30B demo is shown below (a rough size comparison with the 7B configuration follows the listing):

    • JOB_NAME = "30b_train"
      SEQ_LEN = 2048
      HIDDEN_SIZE = 6144
      NUM_ATTENTION_HEAD = 48
      MLP_RATIO = 8 / 3
      NUM_LAYER = 60
      VOCAB_SIZE = 103168
      MODEL_ONLY_FOLDER = "local:llm_ckpts/xxxx"
      # Ckpt folder format:
      # fs: 'local:/mnt/nfs/XXX'
      SAVE_CKPT_FOLDER = "local:llm_ckpts"
      LOAD_CKPT_FOLDER = "local:llm_ckpts/49"
      # boto3 Ckpt folder format:
      # import os
      # BOTO3_IP = os.environ["BOTO3_IP"] # boto3 bucket endpoint
      # SAVE_CKPT_FOLDER = f"boto3:s3://model_weights.{BOTO3_IP}/internlm"
      # LOAD_CKPT_FOLDER = f"boto3:s3://model_weights.{BOTO3_IP}/internlm/snapshot/1/"
      CHECKPOINT_EVERY = 50
      ckpt = dict(
          enable_save_ckpt=False,  # enable ckpt save.
          save_ckpt_folder=SAVE_CKPT_FOLDER,  # Path to save training ckpt.
          # load_ckpt_folder=LOAD_CKPT_FOLDER,  # Ckpt path to resume training (load weights and scheduler/context states).
          # load_model_only_folder=MODEL_ONLY_FOLDER,  # Path to initialize with given model weights.
          load_optimizer=True,  # Whether to load optimizer states when continuing training.
          checkpoint_every=CHECKPOINT_EVERY,
          async_upload=True,  # async ckpt upload (only works for boto3 ckpt).
          async_upload_tmp_folder="/dev/shm/internlm_tmp_ckpt/",  # path for temporary files during asynchronous upload.
          snapshot_ckpt_folder="/".join([SAVE_CKPT_FOLDER, "snapshot"]),  # directory for snapshot ckpt storage path.
          oss_snapshot_freq=int(CHECKPOINT_EVERY / 2),  # snapshot ckpt save frequency.
      )
      TRAIN_FOLDER = "/path/to/dataset"
      VALID_FOLDER = "/path/to/dataset"
      data = dict(
          seq_len=SEQ_LEN,
          # micro_num means the number of micro_batch contained in one gradient update
          micro_num=4,
          # packed_length = micro_bsz * SEQ_LEN
          micro_bsz=2,
          # defaults to the value of micro_num
          valid_micro_num=4,
          # defaults to 0, means disable evaluate
          valid_every=50,
          pack_sample_into_one=False,
          total_steps=50000,
          skip_batches="",
          rampup_batch_size="",
          # Datasets with less than 50 rows will be discarded
          min_length=50,
          # train_folder=TRAIN_FOLDER,
          # valid_folder=VALID_FOLDER,
      )
      grad_scaler = dict(
          fp16=dict(
              # the initial loss scale, defaults to 2**16
              initial_scale=2**16,
              # the minimum loss scale, defaults to None
              min_scale=1,
              # the number of steps to increase loss scale when no overflow occurs
              growth_interval=1000,
          ),
          # the multiplication factor for increasing loss scale, defaults to 2
          growth_factor=2,
          # the multiplication factor for decreasing loss scale, defaults to 0.5
          backoff_factor=0.5,
          # the maximum loss scale, defaults to None
          max_scale=2**24,
          # the number of overflows before decreasing loss scale, defaults to 2
          hysteresis=2,
      )
      hybrid_zero_optimizer = dict(
          # Enable low_level_optimizer overlap_communication
          overlap_sync_grad=True,
          overlap_sync_param=True,
          # bucket size for nccl communication params
          reduce_bucket_size=512 * 1024 * 1024,
          # grad clipping
          clip_grad_norm=1.0,
      )
      loss = dict(
          label_smoothing=0,
      )
      adam = dict(
          lr=1e-4,
          adam_beta1=0.9,
          adam_beta2=0.95,
          adam_beta2_c=0,
          adam_eps=1e-8,
          weight_decay=0.01,
      )
      lr_scheduler = dict(
          total_steps=data["total_steps"],
          init_steps=0,  # optimizer_warmup_step
          warmup_ratio=0.01,
          eta_min=1e-5,
          last_epoch=-1,
      )
      beta2_scheduler = dict(
          init_beta2=adam["adam_beta2"],
          c=adam["adam_beta2_c"],
          cur_iter=-1,
      )
      model = dict(
          checkpoint=False,  # The proportion of layers for activation checkpointing; the optional values are True/False/[0-1]
          num_attention_heads=NUM_ATTENTION_HEAD,
          embed_split_hidden=True,
          vocab_size=VOCAB_SIZE,
          embed_grad_scale=1,
          parallel_output=True,
          hidden_size=HIDDEN_SIZE,
          num_layers=NUM_LAYER,
          mlp_ratio=MLP_RATIO,
          apply_post_layer_norm=False,
          dtype="torch.float16",  # Support: "torch.float16", "torch.half", "torch.bfloat16", "torch.float32", "torch.tf32"
          norm_type="rmsnorm",
          layer_norm_epsilon=1e-5,
          use_flash_attn=True,
          num_chunks=1,  # if num_chunks > 1, interleaved pipeline scheduler is used.
      )
      """
      zero1 parallel:
          1. if zero1 <= 0, the size of the zero process group is equal to the size of the dp process group,
             so parameters will be divided within the range of dp.
          2. if zero1 == 1, zero is not used, and all dp groups retain the full amount of model parameters.
          3. zero1 > 1 and zero1 <= dp world size, the world size of zero is a subset of dp world size.
          For smaller models, it is usually a better choice to split the parameters within nodes with a setting <= 8.
      pipeline parallel (dict):
          1. size: int, the size of pipeline parallel.
          2. interleaved_overlap: bool, enable/disable communication overlap when using interleaved pipeline scheduler.
      tensor parallel: tensor parallel size, usually the number of GPUs per node.
      """
      parallel = dict(
          zero1=-1,
          tensor=4,
          pipeline=dict(size=1, interleaved_overlap=True),
          sequence_parallel=False,
      )
      cudnn_deterministic = False
      cudnn_benchmark = False
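
  • Compared with the 7B demo, the 30B configuration mainly scales the model up (HIDDEN_SIZE 6144 vs 4096, NUM_ATTENTION_HEAD 48 vs 32, NUM_LAYER 60 vs 32) and changes the parallel layout (tensor=4 with zero1=-1 instead of zero1=8 with no tensor parallelism). The sketch below turns those values into a rough parameter estimate; it counts only the embedding, attention, and MLP weight matrices, assumes a gated MLP implied by mlp_ratio = 8/3 and a tied output head, and ignores biases and norm weights, so treat the totals as order-of-magnitude figures rather than exact InternLM sizes.

    • # Rough parameter-count estimate from the config values (weight matrices only).
      def approx_params(hidden_size, num_layers, vocab_size, mlp_ratio):
          embed = vocab_size * hidden_size                       # token embedding (output head assumed tied)
          attn = 4 * hidden_size * hidden_size                   # Q, K, V and output projections per layer
          mlp = 3 * hidden_size * int(mlp_ratio * hidden_size)   # gated MLP: two up + one down projection per layer
          return embed + num_layers * (attn + mlp)

      print(approx_params(4096, 32, 103168, 8 / 3) / 1e9)   # ~6.9  (the 7B demo)
      print(approx_params(6144, 60, 103168, 8 / 3) / 1e9)   # ~27.8 (the 30B demo)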
      

Reference: 30B Demo — InternLM 0.2.0 documentation

