byobu

byobu/tmux is an enhanced replacement for screen. Typing the byobu command directly creates or restores a session, and its other shortcut keys remain compatible with screen.

Install byobu:

sudo apt install byobu

Create a new byobu session:

byobu new -s <session-name>

Kill a byobu session:

byobu kill-session -t <session-name>

Reattach to an existing byobu session:

byobu attach -t <session-name>

List the current byobu sessions:

byobu ls
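
Putting these commands together, a typical round trip looks like the sketch below (the session name "work" is just an example):

byobu new -s work             # start a named session and work inside it
# press F6 to detach; the session keeps running in the background
byobu ls                      # list the sessions that are still alive
byobu attach -t work          # reattach to the detached session
byobu kill-session -t work    # tear the session down when finished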

Common shortcuts
Shortcut        Usage
F2              Create a new window
F3              Move to the previous window
F4              Move to the next window
F5              Refresh all status notifications
F6              Detach from the session and log out
F7              Enter copy/scrollback mode
F8              Rename the current window
F9              Launch the Byobu Configuration Menu
F12             Lock this terminal
Alt-PageUp      Scroll back through this window's history
Alt-PageDown    Scroll forward through this window's history
Shift-F1        Show the help screen
Shift-F2        Split the screen horizontally (new split below)
Shift-F3        Move focus to the next split
Shift-F4        Move focus to the previous split
Shift-F6        Detach from the session, but do not log out
Shift-F5        Collapse all splits
Shift-F11       Maximize (zoom) the current split
Shift-F12       Toggle all of Byobu's keybindings on or off
Ctrl-F2         Split the screen vertically (new split to the right)
Ctrl-F5         Reconnect any SSH/GPG sockets or agents
Ctrl-F6         Kill the split in focus
Working inside F7 scrollback mode

Hit F7 to enter scrollback mode,
press Space to start selecting,
press g to scroll to the top of the buffer (thanks @GeorgeMarian),
press Enter to copy (to byobu's clipboard, not the terminal/system one),
then run cat > my-byobu-dump.txt in the terminal,
press Alt-Insert or Ctrl-A ] to paste (again, from byobu's clipboard),
and press Ctrl-D to close the file.
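
Since byobu uses tmux as its default backend, the same scrollback dump can also be taken non-interactively with tmux itself; a minimal sketch (the output file name is just an example):

tmux capture-pane -p -S - > my-byobu-dump.txt    # -p prints the pane to stdout, -S - starts from the beginning of the history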

References

#!/bin/bash
# Usage: force_cmd.sh pull/push dir max_tries
cd "$2"
if [ -z "$3" ]; then
    max=10
else
    max=$3
fi
echo "max=$max"
num=0
# git $1  # error code 128: unable to access
while [[ $num -le $max ]]
do
    error=0
    if ! git $1; then error=1; fi
    num=$((num + 1))
    echo $num
    if [ $error = 0 ]; then
        exit
    fi
done
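
For example (the repository path and retry count below are placeholders), a pull over a flaky network can be retried automatically:

bash force_cmd.sh pull ~/my-repo 5    # cd into ~/my-repo and rerun "git pull" until it succeeds (at most 6 attempts)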
#!/bin/bash
# Usage:
#     alias fgit='bash ...force_git.sh'
#     fgit [max_tries] args...
if ! [[ $1 =~ ^-?[0-9]+$ ]]; then
    max=10
    cmd=${@:1}
else
    max=$1
    cmd=${@:2}
fi
echo "max=$max"
echo "cmd=git $cmd"
num=0
# error code 128: unable to access
while [[ $num -le $max ]]
do
    error=0
    if ! git $cmd; then error=1; fi
    num=$((num + 1))
    echo $num
    if [ $error = 0 ]; then
        exit
    fi
done
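
With the alias in place (the script path below is hypothetical), any git command can be wrapped the same way:

alias fgit='bash ~/scripts/force_git.sh'    # hypothetical location of the script above
fgit 5 pull origin master                   # rerun "git pull origin master" until it succeeds, at most 6 attempts
fgit push                                   # no leading number, so max_tries falls back to 10
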
https://oldpan.me/archives/pytorch-to-use-multiple-gpus
https://zhuanlan.zhihu.com/p/86441879
https://zhuanlan.zhihu.com/p/234293510
https://zhuanlan.zhihu.com/p/145427849
level set, watershed, point cloud, convex optim, unsupervised, deform
#  Copyright (c) 2020. The Medical Image Computing (MIC) Lab, 陶豪毅
#
#  Licensed under the Apache License, Version 2.0 (the "License");
#  you may not use this file except in compliance with the License.
#  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
#  limitations under the License.

from typing import Union, List

import torch
import torch.nn as nn

from medtk.model.nnModules import ComponentModule, BlockModule
from medtk.model.nd import MaxPoolNd
from medtk.model.blocks import ConvNormAct, VResConvNormAct, \
    BasicBlockNd, BottleneckNd, \
    SEBasicBlockNd, SEBottleneckNd


class ConvLayer(BlockModule):
    def __init__(self, dim, in_channels, out_channels, stride, num_blocks, groups=1, base_width=64, dilation=1):
        super().__init__()
        self.blocks = []
        for i in range(num_blocks):
            if i == 0:
                self.blocks.append(ConvNormAct(dim, in_channels, out_channels, kernel_size=3, padding=1, stride=stride))
            else:
                self.blocks.append(ConvNormAct(dim, out_channels, out_channels, kernel_size=3, padding=1))
            self.blocks.append(ConvNormAct(dim, out_channels, out_channels, kernel_size=3, padding=1))
        for i, m in enumerate(self.blocks):
            self.add_module(str(i), m)

    def forward(self, x):
        for layer in self.blocks:
            x = layer(x)
        return x


class VResConvLayer(BlockModule):
    def __init__(self, dim, in_channels, out_channels, stride, num_blocks, groups=1, base_width=64, dilation=1):
        super().__init__()
        self.blocks = []
        for i in range(num_blocks):
            if i == 0:
                self.blocks.append(ConvNormAct(dim, in_channels, out_channels, kernel_size=3, padding=1, stride=stride))
            else:
                self.blocks.append(VResConvNormAct(dim, out_channels, out_channels, kernel_size=3, padding=1))
        for i, m in enumerate(self.blocks):
            self.add_module(str(i), m)

    def forward(self, x):
        for layer in self.blocks:
            x = layer(x)
        return x


class ResidualLayer(BlockModule):
    BLOCK = BasicBlockNd

    def __init__(self, dim, in_channels, out_channels, stride, num_blocks, groups=1, base_width=64, dilation=1):
        super(ResidualLayer, self).__init__()
        downsample = None
        if stride != 1 or in_channels != out_channels * self.BLOCK.expansion:
            downsample = nn.Sequential(
                self.build_conv(dim, in_channels, out_channels * self.BLOCK.expansion,
                                kernel_size=1, stride=stride, bias=False),
                self.build_norm(dim, out_channels * self.BLOCK.expansion),
            )
        self.blocks = nn.ModuleList([self.BLOCK(dim,
                                                in_planes=in_channels,
                                                planes=out_channels,
                                                stride=stride,
                                                dilation=1,
                                                downsample=downsample,
                                                groups=1,
                                                width_per_group=64)])
        in_planes = out_channels * self.BLOCK.expansion
        for i in range(1, num_blocks):
            self.blocks.append(self.BLOCK(dim,
                                          in_planes=in_planes,
                                          planes=out_channels,
                                          stride=1,
                                          dilation=1,
                                          groups=1,
                                          width_per_group=64))
        for i, m in enumerate(self.blocks):
            self.add_module(str(i), m)

    def forward(self, x):
        for i in range(len(self.blocks)):
            layer = getattr(self, str(i))
            x = layer(x)
        return x


class ResidualBottleneckLayer(ResidualLayer):
    BLOCK = BottleneckNd

    def __init__(self, dim, in_channels, out_channels, stride, num_blocks, groups=1, base_width=64, dilation=1):
        out_channels = out_channels // self.BLOCK.expansion
        super(ResidualBottleneckLayer, self).__init__(dim, in_channels, out_channels,
                                                      stride, num_blocks, groups, base_width, dilation)


class SEResidualLayer(ResidualLayer):
    BLOCK = SEBasicBlockNd

    def __init__(self, dim, in_channels, out_channels, stride, num_blocks, groups=1, base_width=64, dilation=1):
        super(SEResidualLayer, self).__init__(dim, in_channels, out_channels,
                                              stride, num_blocks, groups, base_width, dilation)


class SEResidualBottleneckLayer(ResidualLayer):
    BLOCK = SEBottleneckNd

    def __init__(self, dim, in_channels, out_channels, stride, num_blocks, groups=1, base_width=64, dilation=1):
        out_channels = out_channels // self.BLOCK.expansion
        super(SEResidualBottleneckLayer, self).__init__(dim, in_channels, out_channels,
                                                        stride, num_blocks, groups, base_width, dilation)


class Encoder(ComponentModule):
    """support list:
    - Vanilla UNet
    - ResNet
    - ResNeXt
    """
    LAYERS = {
        'conv': (ConvLayer, 1),  # UNet, VNet
        'v_conv': (VResConvLayer, 1),  # VBNet
        'res': (ResidualLayer, 1),  # ResNet 18, 34
        'b_res': (ResidualBottleneckLayer, 4),  # ResNet or ResNeXt ge than 50
        'se_res': (SEResidualLayer, 1),  # SEResNet 18, 34
        'se_b_res': (SEResidualBottleneckLayer, 4),
    }

    def __init__(self,
                 dim: int,
                 in_channels: int,
                 features=(16, 32, 64, 128),
                 strides=(1, 2, 2, 2),
                 dilations=(1, 1, 1, 1),
                 num_blocks=(1, 1, 1, 1),
                 out_indices=(0, 1, 2, 3),
                 layer_type='conv',
                 groups=1,
                 width_per_group=64,
                 first_conv=(64, 7, 1),
                 downsample=False):
        super(Encoder, self).__init__()
        assert isinstance(out_indices, (list, tuple)), \
            'out_indices must be a list/tuple but get a {}'.format(type(out_indices))
        assert max(out_indices) < len(strides), "max out_index must smaller than stages"
        assert len(strides) == len(num_blocks) == len(features)
        assert layer_type in self.LAYERS.keys()

        self.first_features, self.first_kernel, self.first_stride = first_conv
        self.downsample = downsample
        self.dim = dim
        self.in_channels = in_channels
        self.features = features
        self.strides = strides
        self.dilations = dilations
        self.num_blocks = num_blocks
        self.out_indices = out_indices
        self.stages = len(self.strides)
        self.groups = groups
        self.width_per_group = width_per_group
        self.layer_type = layer_type
        self.layer, self.expansion = self.LAYERS[layer_type]

        self.conv1 = self.build_conv(dim, self.in_channels,
                                     self.first_features,
                                     kernel_size=self.first_kernel,
                                     stride=self.first_stride,
                                     padding=self.first_kernel // 2,
                                     bias='res' not in self.layer_type)
        self.bn1 = self.build_norm(self.dim, self.first_features)
        self.relu = self.build_act()
        self.maxpool = MaxPoolNd(self.dim)(kernel_size=3, stride=2, padding=1)
        self.layers = self.init_layers()
        # self.init_weights()

    def init_layers(self):
        layers = nn.ModuleList()
        in_planes = self.first_features
        for i in range(self.stages):
            layer_name = 'layer{}'.format(i + 1)
            layer = self.layer(self.dim,
                               in_planes,
                               self.features[i],
                               stride=self.strides[i],
                               num_blocks=self.num_blocks[i],
                               groups=self.groups,
                               base_width=self.width_per_group,
                               dilation=self.dilations[i])
            in_planes = self.features[i]
            self.add_module(layer_name, layer)
            layers.append(layer)
        return layers

    def init_weights(self):
        for m in self.modules():
            if self.is_conv(self.dim, m):
                nn.init.kaiming_normal_(m.weight, 1e-2)
                if m.bias is not None:
                    nn.init.constant_(m.bias, 0)
            elif self.is_norm(self.dim, m):
                nn.init.normal_(m.weight, 1.0, 0.02)
                m.bias.data.zero_()

    def forward(self, inputs):
        x = self.conv1(inputs)
        x = self.bn1(x)
        x = self.relu(x)
        if self.downsample:
            x = self.maxpool(x)
        outs = []
        for i in range(self.stages):
            layer_name = 'layer{}'.format(i + 1)
            layer = getattr(self, layer_name)
            x = layer(x)
            if i in self.out_indices:
                outs.append(x)
        return outs


if __name__ == "__main__":
    def init_seed(SEED):
        torch.manual_seed(SEED)
        torch.cuda.manual_seed_all(SEED)
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False

    init_seed(666)

    # UNet = Encoder(
    #     dim=2,
    #     in_channels=3,
    #     features=(32, 64, 128, 256),
    #     strides=(1, 2, 2, 2),
    #     dilations=(1, 1, 1, 1),
    #     num_blocks=(1, 1, 1, 1),
    #     out_indices=(0, 1, 2, 3),
    #     layer_type='conv'
    # )
    # model = UNet

    # VNet = Encoder(
    #     dim=2,
    #     in_channels=3,
    #     features=(32, 64, 128, 256),
    #     strides=(1, 2, 2, 2),
    #     dilations=(1, 1, 1, 1),
    #     num_blocks=(1, 2, 3, 4),
    #     out_indices=(0, 1, 2, 3),
    #     layer_type='v_conv'
    # )
    # model = VNet

    # TVNet = Encoder(
    #     dim=2,
    #     in_channels=3,
    #     features=(32, 64, 128, 256),
    #     strides=(1, 2, 2, 2),
    #     dilations=(1, 1, 1, 1),
    #     num_blocks=(2, 2, 2, 2),
    #     out_indices=(0, 1, 2, 3),
    #     layer_type='v_conv'
    # )
    # model = TVNet

    # ResNet18 = Encoder(
    #     dim=2,
    #     in_channels=3,
    #     features=(64, 128, 256, 512),
    #     strides=(1, 2, 2, 2),
    #     dilations=(1, 1, 1, 1),
    #     num_blocks=(2, 2, 2, 2),
    #     out_indices=(0, 1, 2, 3),
    #     first_conv=(64, 7, 2),
    #     layer_type='res',
    #     downsample=True
    # )
    # model = ResNet18

    # ResNet34 = Encoder(
    #     dim=2,
    #     in_channels=3,
    #     features=(64, 128, 256, 512),
    #     strides=(1, 2, 2, 2),
    #     dilations=(1, 1, 1, 1),
    #     num_blocks=(3, 4, 6, 3),
    #     out_indices=(0, 1, 2, 3),
    #     first_conv=(64, 7, 2),
    #     layer_type='res',
    #     downsample=True
    # )
    # model = ResNet34

    # ResNet50 = Encoder(
    #     dim=2,
    #     in_channels=3,
    #     features=(256, 512, 1024, 2048),
    #     strides=(1, 2, 2, 2),
    #     dilations=(1, 1, 1, 1),
    #     num_blocks=(3, 4, 6, 3),
    #     out_indices=(0, 1, 2, 3),
    #     first_conv=(64, 7, 2),
    #     layer_type='b_res',
    #     downsample=True
    # )
    # model = ResNet50

    # ResNeXt50_32x4 = Encoder(
    #     dim=2,
    #     in_channels=3,
    #     features=(256, 512, 1024, 2048),
    #     strides=(1, 2, 2, 2),
    #     dilations=(1, 1, 1, 1),
    #     num_blocks=(3, 4, 6, 3),
    #     out_indices=(0, 1, 2, 3),
    #     groups=32,
    #     width_per_group=4,
    #     first_conv=(64, 7, 2),
    #     layer_type='b_res',
    #     downsample=True
    # )
    # model = ResNeXt50_32x4

    # SEResNet50 = Encoder(
    #     dim=2,
    #     in_channels=3,
    #     features=(256, 512, 1024, 2048),
    #     strides=(1, 2, 2, 2),
    #     dilations=(1, 1, 1, 1),
    #     num_blocks=(3, 4, 6, 3),
    #     out_indices=(0, 1, 2, 3),
    #     first_conv=(64, 7, 2),
    #     layer_type='se_b_res',
    #     downsample=True
    # )
    # model = SEResNet50

    # DeeplungResNet18 = Encoder(
    #     dim=3,
    #     in_channels=1,
    #     features=(32, 64, 64, 64),
    #     strides=(1, 2, 2, 2),
    #     dilations=(1, 1, 1, 1),
    #     num_blocks=(2, 2, 3, 3),
    #     out_indices=(0, 1, 2, 3),
    #     first_conv=(24, 7, 2),
    #     layer_type='res',
    #     groups=32,
    #     width_per_group=4,
    # )
    # model = DeeplungResNet18

    ResNet18 = Encoder(
        dim=3,
        in_channels=1,
        features=(16, 32, 64, 128),
        strides=(1, 2, 2, 2),
        dilations=(1, 1, 1, 1),
        num_blocks=(2, 2, 2, 2),
        out_indices=(0, 1, 2, 3),
        first_conv=(64, 7, 2),
        layer_type='res',
        downsample=True
    )
    model = ResNet18

    print(model)
    model.print_model_params()

    data = torch.ones((1, 1, 96, 96, 96))
    outs = model(data)
    for o in outs:
        print(o.shape)
        print(torch.sum(o))

    from medtk.runner.checkpoint import load_checkpoint
    # load_checkpoint(model, 'https://download.pytorch.org/models/resnet18-5c106cde.pth')
    # load_checkpoint(model, 'https://download.pytorch.org/models/resnet34-333f7ec4.pth')
    # load_checkpoint(model, 'https://download.pytorch.org/models/resnet50-19c8e357.pth')
    # load_checkpoint(model, 'https://download.pytorch.org/models/resnext50_32x4d-7cdf4587.pth')
    # load_checkpoint(model, 'http://data.lip6.fr/cadene/pretrainedmodels/se_resnet50-ce0d4300.pth')
