Python config.batch_size — code examples
This article collects typical usage examples of config.batch_size in Python. If you are unsure what config.batch_size does or how to use it, the curated examples below should help. You can also explore other usage examples from the config module.

Below are 29 code examples of config.batch_size, sorted by popularity by default. Upvote the examples you find useful; ratings help the system recommend better Python code examples.
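All of the examples below follow the same pattern: hyperparameters such as batch_size live as plain module-level constants in a config module. As a minimal sketch of that pattern (the names and values here are illustrative, not taken from any of the projects below):

```python
# Hypothetical config.py: hyperparameters as module-level constants.
# The values are illustrative only.
batch_size = 64
vocab_size = 5000
data_workers = 4

# Consumers simply `import config` and read config.batch_size, or do
# `from config import batch_size`. Derived quantities are computed the
# same way:
num_batches_per_epoch = 10000 // batch_size  # floor division
```

Because Python caches modules, every part of a program that imports config sees the same values, which is why this works as a lightweight global configuration mechanism.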
Example 1: get_loader

# Required import: import config  [as an alias]
# Or: from config import batch_size  [as an alias]
def get_loader(train=False, val=False, test=False, trainval=False):
    """ Returns a data loader for the desired split """
    split = VQA(
        utils.path_for(train=train, val=val, test=test, trainval=trainval, question=True),
        utils.path_for(train=train, val=val, test=test, trainval=trainval, answer=True),
        config.preprocessed_trainval_path if not test else config.preprocessed_test_path,
        answerable_only=train or trainval,
        dummy_answers=test,
    )
    loader = torch.utils.data.DataLoader(
        split,
        batch_size=config.batch_size,
        shuffle=train or trainval,  # only shuffle the data in training
        pin_memory=True,
        num_workers=config.data_workers,
        collate_fn=collate_fn,
    )
    return loader

Author: KaihuaTang, Project: VQA2.0-Recent-Approachs-2018.pytorch
Example 2: __init__

def __init__(self):
    self.vocab = Vocab(config.vocab_path, config.vocab_size)
    self.batcher = Batcher(config.train_data_path, self.vocab, mode='train',
                           batch_size=config.batch_size, single_pass=False)
    time.sleep(5)  # give the batcher threads time to fill the queue
    if not os.path.exists(config.log_root):
        os.mkdir(config.log_root)
    self.model_dir = os.path.join(config.log_root, 'train_model')
    if not os.path.exists(self.model_dir):
        os.mkdir(self.model_dir)
    self.eval_log = os.path.join(config.log_root, 'eval_log')
    if not os.path.exists(self.eval_log):
        os.mkdir(self.eval_log)
    self.summary_writer = tf.compat.v1.summary.FileWriter(self.eval_log)

Author: wyu-du, Project: Reinforce-Paraphrase-Generation
Example 3: adv_train_generator

def adv_train_generator(self, g_step):
    """
    The gen is trained by an MLE-like objective.
    """
    total_g_loss = 0
    for step in range(g_step):
        inp, target = GenDataIter.prepare(self.gen.sample(cfg.batch_size, cfg.batch_size), gpu=cfg.CUDA)
        # ===Train===
        rewards = self.get_mali_reward(target)
        adv_loss = self.gen.adv_loss(inp, target, rewards)
        self.optimize(self.gen_adv_opt, adv_loss)
        total_g_loss += adv_loss.item()
    # ===Test===
    self.log.info('[ADV-GEN]: g_loss = %.4f, %s' % (total_g_loss, self.cal_metrics(fmt_str=True)))

Author: williamSYSU, Project: TextGAN-PyTorch
Example 4: adv_train_generator

def adv_train_generator(self, g_step):
    total_loss = 0
    for step in range(g_step):
        real_samples = F.one_hot(self.oracle_data.random_batch()['target'], cfg.vocab_size).float()
        gen_samples = self.gen.sample(cfg.batch_size, cfg.batch_size, one_hot=True)
        if cfg.CUDA:
            real_samples, gen_samples = real_samples.cuda(), gen_samples.cuda()
        # ===Train===
        d_out_real = self.dis(real_samples)
        d_out_fake = self.dis(gen_samples)
        g_loss, _ = get_losses(d_out_real, d_out_fake, cfg.loss_type)
        self.optimize(self.gen_adv_opt, g_loss, self.gen)
        total_loss += g_loss.item()
    return total_loss / g_step if g_step != 0 else 0

Author: williamSYSU, Project: TextGAN-PyTorch
Example 5: adv_train_discriminator

def adv_train_discriminator(self, d_step):
    total_loss = 0
    for step in range(d_step):
        real_samples = F.one_hot(self.oracle_data.random_batch()['target'], cfg.vocab_size).float()
        gen_samples = self.gen.sample(cfg.batch_size, cfg.batch_size, one_hot=True)
        if cfg.CUDA:
            real_samples, gen_samples = real_samples.cuda(), gen_samples.cuda()
        # ===Train===
        d_out_real = self.dis(real_samples)
        d_out_fake = self.dis(gen_samples)
        _, d_loss = get_losses(d_out_real, d_out_fake, cfg.loss_type)
        self.optimize(self.dis_opt, d_loss, self.dis)
        total_loss += d_loss.item()
    return total_loss / d_step if d_step != 0 else 0

Author: williamSYSU, Project: TextGAN-PyTorch
Example 6: cal_metrics

def cal_metrics(self, fmt_str=False):
    """
    Calculate metrics
    :param fmt_str: if True, return a formatted string for logging
    """
    with torch.no_grad():
        # Prepare data for evaluation
        gen_data = GenDataIter(self.gen.sample(cfg.samples_num, 4 * cfg.batch_size))
        # Reset metrics
        self.nll_oracle.reset(self.oracle, gen_data.loader)
        self.nll_gen.reset(self.gen, self.oracle_data.loader)
        self.nll_div.reset(self.gen, gen_data.loader)
    if fmt_str:
        return ', '.join(['%s = %s' % (metric.get_name(), metric.get_score()) for metric in self.all_metrics])
    else:
        return [metric.get_score() for metric in self.all_metrics]

Author: williamSYSU, Project: TextGAN-PyTorch
Example 7: train_discriminator

def train_discriminator(self, d_step, d_epoch, phase='MLE'):
    """
    Train the discriminator on real samples (positive) and generated samples from gen (negative).
    Samples are drawn d_step times, and the discriminator is trained for d_epoch epochs each time.
    """
    # prepare loader for validation
    global d_loss, train_acc
    for step in range(d_step):
        # prepare loader for training
        pos_samples = self.train_data.target  # do not re-sample the Oracle data
        neg_samples = self.gen.sample(cfg.samples_num, 4 * cfg.batch_size)
        dis_data = DisDataIter(pos_samples, neg_samples)
        for epoch in range(d_epoch):
            # ===Train===
            d_loss, train_acc = self.train_dis_epoch(self.dis, dis_data.loader, self.dis_criterion,
                                                     self.dis_opt)
        # ===Test===
        self.log.info('[%s-DIS] d_step %d: d_loss = %.4f, train_acc = %.4f,' % (
            phase, step, d_loss, train_acc))
    if cfg.if_save and not cfg.if_test:
        torch.save(self.dis.state_dict(), cfg.pretrained_dis_path)

Author: williamSYSU, Project: TextGAN-PyTorch
Example 8: adv_train_generator

def adv_train_generator(self, g_step):
    total_loss = 0
    for step in range(g_step):
        real_samples = self.train_data.random_batch()['target']
        gen_samples = self.gen.sample(cfg.batch_size, cfg.batch_size, one_hot=True)
        if cfg.CUDA:
            real_samples, gen_samples = real_samples.cuda(), gen_samples.cuda()
        real_samples = F.one_hot(real_samples, cfg.vocab_size).float()
        # ===Train===
        d_out_real = self.dis(real_samples)
        d_out_fake = self.dis(gen_samples)
        g_loss, _ = get_losses(d_out_real, d_out_fake, cfg.loss_type)
        self.optimize(self.gen_adv_opt, g_loss, self.gen)
        total_loss += g_loss.item()
    return total_loss / g_step if g_step != 0 else 0

Author: williamSYSU, Project: TextGAN-PyTorch
Example 9: adv_train_generator

def adv_train_generator(self, g_step):
    """
    The gen is trained with policy gradients, using the reward from the discriminator.
    Training is done for num_batches batches.
    """
    rollout_func = rollout.ROLLOUT(self.gen, cfg.CUDA)
    total_g_loss = 0
    for step in range(g_step):
        inp, target = GenDataIter.prepare(self.gen.sample(cfg.batch_size, cfg.batch_size), gpu=cfg.CUDA)
        # ===Train===
        rewards = rollout_func.get_reward(target, cfg.rollout_num, self.dis)
        adv_loss = self.gen.batchPGLoss(inp, target, rewards)
        self.optimize(self.gen_adv_opt, adv_loss)
        total_g_loss += adv_loss.item()
    # ===Test===
    self.log.info('[ADV-GEN]: g_loss = %.4f, %s' % (total_g_loss, self.cal_metrics(fmt_str=True)))

Author: williamSYSU, Project: TextGAN-PyTorch
Example 10: train_discriminator

def train_discriminator(self, d_step, d_epoch, phase='MLE'):
    """
    Train the discriminator on real samples (positive) and generated samples from gen (negative).
    Samples are drawn d_step times, and the discriminator is trained for d_epoch epochs each time.
    """
    # prepare loader for validation
    global d_loss, train_acc
    for step in range(d_step):
        # prepare loader for training
        pos_samples = self.train_data.target
        neg_samples = self.gen.sample(cfg.samples_num, 4 * cfg.batch_size)
        dis_data = DisDataIter(pos_samples, neg_samples)
        for epoch in range(d_epoch):
            # ===Train===
            d_loss, train_acc = self.train_dis_epoch(self.dis, dis_data.loader, self.dis_criterion,
                                                     self.dis_opt)
        # ===Test===
        self.log.info('[%s-DIS] d_step %d: d_loss = %.4f, train_acc = %.4f,' % (
            phase, step, d_loss, train_acc))
    if cfg.if_save and not cfg.if_test:
        torch.save(self.dis.state_dict(), cfg.pretrained_dis_path)

Author: williamSYSU, Project: TextGAN-PyTorch
Example 11: train_discriminator

def train_discriminator(self, d_step, d_epoch, phase='MLE'):
    """
    Train the discriminator on real samples (positive) and generated samples from gen (negative).
    Samples are drawn d_step times, and the discriminator is trained for d_epoch epochs each time.
    """
    d_loss, train_acc = 0, 0
    for step in range(d_step):
        # prepare loader for training
        pos_samples = self.train_data.target
        neg_samples = self.gen.sample(cfg.samples_num, cfg.batch_size, self.dis)
        dis_data = DisDataIter(pos_samples, neg_samples)
        for epoch in range(d_epoch):
            # ===Train===
            d_loss, train_acc = self.train_dis_epoch(self.dis, dis_data.loader, self.dis_criterion,
                                                     self.dis_opt)
        # ===Test===
        self.log.info('[%s-DIS] d_step %d: d_loss = %.4f, train_acc = %.4f,' % (
            phase, step, d_loss, train_acc))

Author: williamSYSU, Project: TextGAN-PyTorch
Example 12: cal_metrics

def cal_metrics(self, fmt_str=False):
    with torch.no_grad():
        # Prepare data for evaluation
        eval_samples = self.gen.sample(cfg.samples_num, cfg.batch_size, self.dis)
        gen_data = GenDataIter(eval_samples)
        gen_tokens = tensor_to_tokens(eval_samples, self.idx2word_dict)
        gen_tokens_s = tensor_to_tokens(self.gen.sample(200, cfg.batch_size, self.dis), self.idx2word_dict)
        # Reset metrics
        self.bleu.reset(test_text=gen_tokens, real_text=self.test_data.tokens)
        self.nll_gen.reset(self.gen, self.train_data.loader, leak_dis=self.dis)
        self.nll_div.reset(self.gen, gen_data.loader, leak_dis=self.dis)
        self.self_bleu.reset(test_text=gen_tokens_s, real_text=gen_tokens)
        self.ppl.reset(gen_tokens)
    if fmt_str:
        return ', '.join(['%s = %s' % (metric.get_name(), metric.get_score()) for metric in self.all_metrics])
    else:
        return [metric.get_score() for metric in self.all_metrics]

Author: williamSYSU, Project: TextGAN-PyTorch
Example 13: adv_train_generator

def adv_train_generator(self, g_step):
    """
    The gen is trained with policy gradients, using the reward from the discriminator.
    Training is done for num_batches batches.
    """
    for i in range(cfg.k_label):
        rollout_func = rollout.ROLLOUT(self.gen_list[i], cfg.CUDA)
        total_g_loss = 0
        for step in range(g_step):
            inp, target = GenDataIter.prepare(self.gen_list[i].sample(cfg.batch_size, cfg.batch_size), gpu=cfg.CUDA)
            # ===Train===
            rewards = rollout_func.get_reward(target, cfg.rollout_num, self.dis, current_k=i)
            adv_loss = self.gen_list[i].batchPGLoss(inp, target, rewards)
            self.optimize(self.gen_opt_list[i], adv_loss)
            total_g_loss += adv_loss.item()
    # ===Test===
    self.log.info('[ADV-GEN]: %s', self.comb_metrics(fmt_str=True))

Author: williamSYSU, Project: TextGAN-PyTorch
Example 14: cal_metrics_with_label

def cal_metrics_with_label(self, label_i):
    assert type(label_i) == int, 'missing label'
    with torch.no_grad():
        # Prepare data for evaluation
        eval_samples = self.gen_list[label_i].sample(cfg.samples_num, 8 * cfg.batch_size)
        gen_data = GenDataIter(eval_samples)
        gen_tokens = tensor_to_tokens(eval_samples, self.idx2word_dict)
        gen_tokens_s = tensor_to_tokens(self.gen_list[label_i].sample(200, 200), self.idx2word_dict)
        clas_data = CatClasDataIter([eval_samples], label_i)
        # Reset metrics
        self.bleu.reset(test_text=gen_tokens, real_text=self.test_data_list[label_i].tokens)
        self.nll_gen.reset(self.gen_list[label_i], self.train_data_list[label_i].loader)
        self.nll_div.reset(self.gen_list[label_i], gen_data.loader)
        self.self_bleu.reset(test_text=gen_tokens_s, real_text=gen_tokens)
        self.clas_acc.reset(self.clas, clas_data.loader)
        self.ppl.reset(gen_tokens)
    return [metric.get_score() for metric in self.all_metrics]

Author: williamSYSU, Project: TextGAN-PyTorch
Example 15: cal_metrics_with_label

def cal_metrics_with_label(self, label_i):
    assert type(label_i) == int, 'missing label'
    with torch.no_grad():
        # Prepare data for evaluation
        eval_samples = self.gen.sample(cfg.samples_num, 8 * cfg.batch_size, label_i=label_i)
        gen_data = GenDataIter(eval_samples)
        gen_tokens = tensor_to_tokens(eval_samples, self.idx2word_dict)
        gen_tokens_s = tensor_to_tokens(self.gen.sample(200, 200, label_i=label_i), self.idx2word_dict)
        clas_data = CatClasDataIter([eval_samples], label_i)
        # Reset metrics
        self.bleu.reset(test_text=gen_tokens, real_text=self.test_data_list[label_i].tokens)
        self.nll_gen.reset(self.gen, self.train_data_list[label_i].loader, label_i)
        self.nll_div.reset(self.gen, gen_data.loader, label_i)
        self.self_bleu.reset(test_text=gen_tokens_s, real_text=gen_tokens)
        self.clas_acc.reset(self.clas, clas_data.loader)
        self.ppl.reset(gen_tokens)
    return [metric.get_score() for metric in self.all_metrics]

Author: williamSYSU, Project: TextGAN-PyTorch
Example 16: create_oracle

def create_oracle():
    """Create a new Oracle model and the Oracle's samples"""
    import config as cfg
    from models.Oracle import Oracle

    print('Creating Oracle...')
    oracle = Oracle(cfg.gen_embed_dim, cfg.gen_hidden_dim, cfg.vocab_size,
                    cfg.max_seq_len, cfg.padding_idx, gpu=cfg.CUDA)
    if cfg.CUDA:
        oracle = oracle.cuda()
    torch.save(oracle.state_dict(), cfg.oracle_state_dict_path)

    big_samples = oracle.sample(cfg.samples_num, 4 * cfg.batch_size)
    # large sample set
    torch.save(big_samples, cfg.oracle_samples_path.format(cfg.samples_num))
    # small sample set
    torch.save(oracle.sample(cfg.samples_num // 2, 4 * cfg.batch_size),
               cfg.oracle_samples_path.format(cfg.samples_num // 2))

    oracle_data = GenDataIter(big_samples)
    mle_criterion = nn.NLLLoss()
    ground_truth = NLL.cal_nll(oracle, oracle_data.loader, mle_criterion)
    print('NLL_Oracle Ground Truth: %.4f' % ground_truth)

Author: williamSYSU, Project: TextGAN-PyTorch
Example 17: pretrain_loss

def pretrain_loss(self, target, dis, start_letter=cfg.start_letter):
    """
    Returns the pretrain-generator loss for predicting the target sequence.
    Inputs: target, dis, start_letter
        - target: batch_size * seq_len
    """
    batch_size, seq_len = target.size()
    _, feature_array, goal_array, leak_out_array = self.forward_leakgan(target, dis, if_sample=False, no_log=False,
                                                                        start_letter=start_letter)
    # Manager loss
    mana_cos_loss = self.manager_cos_loss(batch_size, feature_array,
                                          goal_array)  # batch_size * (seq_len / step_size)
    manager_loss = -torch.sum(mana_cos_loss) / (batch_size * (seq_len // self.step_size))
    # Worker loss
    work_nll_loss = self.worker_nll_loss(target, leak_out_array)  # batch_size * seq_len
    work_loss = torch.sum(work_nll_loss) / (batch_size * seq_len)
    return manager_loss, work_loss

Author: williamSYSU, Project: TextGAN-PyTorch
Example 18: adversarial_loss

def adversarial_loss(self, target, rewards, dis, start_letter=cfg.start_letter):
    """
    Returns a pseudo-loss that gives the corresponding policy gradients (on calling .backward()).
    Inspired by the example in http://karpathy.github.io/2016/05/31/rl/
    Inputs: target, rewards, dis, start_letter
        - target: batch_size * seq_len
        - rewards: batch_size * seq_len (discriminator rewards for each token)
    """
    batch_size, seq_len = target.size()
    _, feature_array, goal_array, leak_out_array = self.forward_leakgan(target, dis, if_sample=False, no_log=False,
                                                                        start_letter=start_letter, train=True)
    # Manager loss
    t0 = time.time()
    mana_cos_loss = self.manager_cos_loss(batch_size, feature_array,
                                          goal_array)  # batch_size * (seq_len / step_size)
    mana_loss = -torch.sum(rewards * mana_cos_loss) / (batch_size * (seq_len // self.step_size))
    # Worker loss
    work_nll_loss = self.worker_nll_loss(target, leak_out_array)  # batch_size * seq_len
    work_cos_reward = self.worker_cos_reward(feature_array, goal_array)  # batch_size * seq_len
    work_loss = -torch.sum(work_nll_loss * work_cos_reward) / (batch_size * seq_len)
    return mana_loss, work_loss

Author: williamSYSU, Project: TextGAN-PyTorch
Example 19: worker_cos_reward

def worker_cos_reward(self, feature_array, goal_array):
    """
    Get the reward for the worker (cosine distance)
    :return: cos_loss: batch_size * seq_len
    """
    for i in range(int(self.max_seq_len / self.step_size)):
        real_feature = feature_array[:, i * self.step_size, :].unsqueeze(1).expand((-1, self.step_size, -1))
        feature_array[:, i * self.step_size:(i + 1) * self.step_size, :] = real_feature
        if i > 0:
            sum_goal = torch.sum(goal_array[:, (i - 1) * self.step_size:i * self.step_size, :], dim=1, keepdim=True)
        else:
            sum_goal = goal_array[:, 0, :].unsqueeze(1)
        goal_array[:, i * self.step_size:(i + 1) * self.step_size, :] = sum_goal.expand((-1, self.step_size, -1))

    offset_feature = feature_array[:, 1:, :]  # f_{t+1}, batch_size * seq_len * goal_out_size
    goal_array = goal_array[:, :self.max_seq_len, :]  # batch_size * seq_len * goal_out_size
    sub_feature = offset_feature - goal_array

    # L2 normalization
    sub_feature = F.normalize(sub_feature, p=2, dim=-1)
    all_goal = F.normalize(goal_array, p=2, dim=-1)
    cos_loss = F.cosine_similarity(sub_feature, all_goal, dim=-1)  # batch_size * seq_len
    return cos_loss

Author: williamSYSU, Project: TextGAN-PyTorch
Example 20: step

def step(self, inp, hidden):
    """
    RelGAN step forward
    :param inp: [batch_size]
    :param hidden: memory size
    :return: pred, hidden, next_token, next_token_onehot, next_o
        - pred: batch_size * vocab_size, used for the adversarial-training backward pass
        - hidden: next hidden state
        - next_token: [batch_size], next sentence token
        - next_token_onehot: batch_size * vocab_size, not used yet
        - next_o: batch_size * vocab_size, not used yet
    """
    emb = self.embeddings(inp).unsqueeze(1)
    out, hidden = self.lstm(emb, hidden)
    gumbel_t = self.add_gumbel(self.lstm2out(out.squeeze(1)))
    next_token = torch.argmax(gumbel_t, dim=1).detach()
    # next_token_onehot = F.one_hot(next_token, cfg.vocab_size).float()  # not used yet
    next_token_onehot = None
    pred = F.softmax(gumbel_t * self.temperature, dim=-1)  # batch_size * vocab_size
    # next_o = torch.sum(next_token_onehot * pred, dim=1)  # not used yet
    next_o = None
    return pred, hidden, next_token, next_token_onehot, next_o

Author: williamSYSU, Project: TextGAN-PyTorch
Example 21: sample

def sample(self, num_samples, batch_size, start_letter=cfg.start_letter):
    """
    Samples the network and returns num_samples samples of length max_seq_len.
    :return samples: num_samples * max_seq_len (a sampled sequence in each row)
    """
    num_batch = num_samples // batch_size + 1 if num_samples != batch_size else 1
    samples = torch.zeros(num_batch * batch_size, self.max_seq_len).long()

    # Generate sentences with a multinomial sampling strategy
    for b in range(num_batch):
        hidden = self.init_hidden(batch_size)
        inp = torch.LongTensor([start_letter] * batch_size)
        if self.gpu:
            inp = inp.cuda()
        for i in range(self.max_seq_len):
            out, hidden = self.forward(inp, hidden, need_hidden=True)  # out: batch_size * vocab_size
            next_token = torch.multinomial(torch.exp(out), 1)  # batch_size * 1 (sampling from each row)
            samples[b * batch_size:(b + 1) * batch_size, i] = next_token.view(-1)
            inp = next_token.view(-1)
    samples = samples[:num_samples]
    return samples

Author: williamSYSU, Project: TextGAN-PyTorch
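The batch arithmetic in sample() above (over-generate whole batches, then trim) can be isolated into a small sketch. The helper name plan_batches is hypothetical, introduced only to make the arithmetic testable:

```python
def plan_batches(num_samples, batch_size):
    # Mirrors the arithmetic in sample() above: unless exactly one batch is
    # requested (num_samples == batch_size), round the batch count up with
    # floor-division-plus-one, generate num_batch * batch_size rows, then
    # trim back to num_samples with samples[:num_samples].
    num_batch = num_samples // batch_size + 1 if num_samples != batch_size else 1
    generated = num_batch * batch_size
    return num_batch, generated

# 250 samples at batch_size 64 -> 4 batches of 64, 256 rows, trimmed to 250.
```

Note that the formula generates one spare batch whenever num_samples is an exact multiple of batch_size other than one batch (e.g. 128 at 64 gives 3 batches, 192 rows); the final slice silently discards the surplus, so the output is still correct.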
Example 22: __init__

def __init__(self, features,
             augmentations,
             batch_size=batch_size,
             input1=(image_size_h_p, image_size_w_p, nchannels),
             input2=(image_size_h_c, image_size_w_c, nchannels),
             type1=None,
             metadata_dict=None,
             metadata_length=0,
             with_paths=False):
    self.features = features
    self.batch_size = batch_size
    self.vec_size = input1
    self.vec_size2 = input2
    self.type = type1
    self.metadata_dict = metadata_dict
    self.metadata_length = metadata_length
    self.augment = augmentations
    self.with_paths = with_paths

Author: icarofua, Project: vehicle-ReId
Example 23: forward

def forward(self, state, tau, num_quantiles):
    input_size = state.size()[0]  # batch_size (train) or 1 (get_action)
    tau = tau.expand(input_size * num_quantiles, quantile_embedding_dim)
    pi_mtx = torch.Tensor(np.pi * np.arange(0, quantile_embedding_dim)).expand(input_size * num_quantiles, quantile_embedding_dim)
    cos_tau = torch.cos(tau * pi_mtx)

    phi = self.phi(cos_tau)
    phi = F.relu(phi)

    state_tile = state.expand(input_size, num_quantiles, self.num_inputs)
    state_tile = state_tile.flatten().view(-1, self.num_inputs)

    x = F.relu(self.fc1(state_tile))
    x = self.fc2(x * phi)
    z = x.view(-1, num_quantiles, self.num_outputs)
    z = z.transpose(1, 2)  # [input_size, num_output, num_quantile]
    return z

Author: g6ling, Project: Reinforcement-Learning-Pytorch-Cartpole
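The tau expansion in the forward pass above computes the IQN-style cosine quantile embedding, phi_j(tau) = cos(pi * j * tau) for j = 0..D-1. A minimal NumPy sketch of just that embedding (the dimension value and function name are illustrative):

```python
import numpy as np

quantile_embedding_dim = 8  # illustrative value

def cos_embedding(tau):
    # tau: (N,) sampled quantile fractions in [0, 1]
    j = np.arange(quantile_embedding_dim)       # (D,) -> 0, 1, ..., D-1
    return np.cos(np.pi * j * tau[:, None])     # (N, D), broadcast over j

emb = cos_embedding(np.array([0.0, 0.5]))
# the j = 0 column is cos(0) = 1 for every tau
```

Each quantile fraction thus becomes a D-dimensional feature vector before being passed through the phi network and multiplied elementwise with the state features.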
Example 24: config_initialization

def config_initialization():
    # image shape and feature layers shape inference
    image_shape = (FLAGS.train_image_height, FLAGS.train_image_width)
    if not FLAGS.dataset_dir:
        raise ValueError('You must supply the dataset directory with --dataset_dir')
    tf.logging.set_verbosity(tf.logging.DEBUG)
    util.init_logger(log_file='log_train_seglink_%d_%d.log' % image_shape, log_path=FLAGS.train_dir, stdout=False, mode='a')

    config.init_config(image_shape,
                       batch_size=FLAGS.batch_size,
                       weight_decay=FLAGS.weight_decay,
                       num_gpus=FLAGS.num_gpus,
                       train_with_ignored=FLAGS.train_with_ignored,
                       seg_loc_loss_weight=FLAGS.seg_loc_loss_weight,
                       link_cls_loss_weight=FLAGS.link_cls_loss_weight,
                       )
    batch_size = config.batch_size
    batch_size_per_gpu = config.batch_size_per_gpu
    tf.summary.scalar('batch_size', batch_size)
    tf.summary.scalar('batch_size_per_gpu', batch_size_per_gpu)

    util.proc.set_proc_name(FLAGS.model_name + '_' + FLAGS.dataset_name)
    dataset = dataset_factory.get_dataset(FLAGS.dataset_name, FLAGS.dataset_split_name, FLAGS.dataset_dir)
    config.print_config(FLAGS, dataset)
    return dataset

Author: dengdan, Project: seglink
Example 25: config_initialization

def config_initialization():
    if not FLAGS.dataset_dir:
        raise ValueError('You must supply the dataset directory with --dataset_dir')
    tf.logging.set_verbosity(tf.logging.DEBUG)

    # image shape and feature layers shape inference
    image_shape = (FLAGS.train_image_height, FLAGS.train_image_width)
    config.init_config(image_shape, batch_size=FLAGS.batch_size)
    util.proc.set_proc_name(FLAGS.model_name + '_' + FLAGS.dataset_name)
    dataset = dataset_factory.get_dataset(FLAGS.dataset_name, FLAGS.dataset_split_name, FLAGS.dataset_dir)
    # config.print_config(FLAGS, dataset)
    return dataset

Author: dengdan, Project: seglink
Example 26: main

# Note: this snippet is Python 2 code (print statement, xrange).
def main(_):
    util.init_logger()
    dump_path = util.io.get_absolute_path('~/temp/no-use/seglink/')
    dataset = config_initialization()
    batch_queue = create_dataset_batch_queue(dataset)
    batch_size = config.batch_size
    summary_op = tf.summary.merge_all()
    with tf.Session() as sess:
        tf.train.start_queue_runners(sess)
        b_image, b_seg_label, b_seg_offsets, b_link_label = batch_queue.dequeue()
        batch_idx = 0
        while True:  # batch_idx < 50:
            image_data_batch, seg_label_data_batch, seg_offsets_data_batch, link_label_data_batch = \
                sess.run([b_image, b_seg_label, b_seg_offsets, b_link_label])
            for image_idx in xrange(batch_size):
                image_data = image_data_batch[image_idx, ...]
                seg_label_data = seg_label_data_batch[image_idx, ...]
                seg_offsets_data = seg_offsets_data_batch[image_idx, ...]
                link_label_data = link_label_data_batch[image_idx, ...]

                image_data = image_data + [123, 117, 104]  # add the mean pixel values back
                image_data = np.asarray(image_data, dtype=np.uint8)

                # decode the encoded ground truth back to bboxes
                bboxes = seglink.seglink_to_bbox(seg_scores=seg_label_data,
                                                 link_scores=link_label_data,
                                                 seg_offsets_pred=seg_offsets_data)
                # draw bboxes on the image
                for bbox_idx in xrange(len(bboxes)):
                    bbox = bboxes[bbox_idx, :]
                    draw_bbox(image_data, bbox)

                image_path = util.io.join_path(dump_path, '%d_%d.jpg' % (batch_idx, image_idx))
                util.plt.imwrite(image_path, image_data)
                print 'Make sure that the text on the image is correctly bounded ' \
                      'with oriented boxes:', image_path
            batch_idx += 1

Author: dengdan, Project: seglink
Example 27: __len__

def __len__(self):
    return int(np.ceil(len(self.samples) / float(batch_size)))

Author: foamliu, Project: FaceNet
Example 28: __getitem__

def __getitem__(self, idx):
    i = idx * batch_size
    length = min(batch_size, len(self.samples) - i)
    batch_inputs = np.empty((3, length, img_size, img_size, channel), dtype=np.float32)
    batch_dummy_target = np.zeros((length, embedding_size * 3), dtype=np.float32)

    for i_batch in range(length):
        sample = self.samples[i + i_batch]
        for j, role in enumerate(['a', 'p', 'n']):  # anchor, positive, negative
            image_name = sample[role]
            filename = os.path.join(self.image_folder, image_name)
            image = cv.imread(filename)  # BGR
            image = image[:, :, ::-1]  # RGB
            dets = self.detector(image, 1)
            num_faces = len(dets)
            if num_faces > 0:
                # Find the 5 face landmarks we need to do the alignment.
                faces = dlib.full_object_detections()
                for detection in dets:
                    faces.append(self.sp(image, detection))
                image = dlib.get_face_chip(image, faces[0], size=img_size)
            else:
                image = cv.resize(image, (img_size, img_size), cv.INTER_CUBIC)
            if self.usage == 'train':
                image = aug_pipe.augment_image(image)
            batch_inputs[j, i_batch] = preprocess_input(image)
    return [batch_inputs[0], batch_inputs[1], batch_inputs[2]], batch_dummy_target

Author: foamliu, Project: FaceNet
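Examples 27 and 28 together implement the standard Keras Sequence batching contract: __len__ rounds the batch count up so the tail is kept, and __getitem__ clamps the length of the final batch. A framework-free sketch of just that index arithmetic (BatchIndexer is a hypothetical name, not from the FaceNet project):

```python
import numpy as np

batch_size = 32  # illustrative value

class BatchIndexer:
    """Minimal stand-in for the Sequence pattern above: batch count and slicing only."""

    def __init__(self, samples, batch_size=batch_size):
        self.samples = samples
        self.batch_size = batch_size

    def __len__(self):
        # ceil division: a short final batch still counts as a batch
        return int(np.ceil(len(self.samples) / float(self.batch_size)))

    def __getitem__(self, idx):
        i = idx * self.batch_size
        length = min(self.batch_size, len(self.samples) - i)  # clamp the tail
        return self.samples[i:i + length]

# 100 samples at batch_size 32 -> 4 batches; the last holds the 4 leftovers
batches = BatchIndexer(list(range(100)))
```

The same ceil-then-clamp pair is what keeps the two methods consistent: __len__ promises a batch for the remainder, and __getitem__ delivers it without reading past the end of the sample list.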
Example 29: parse_args

def parse_args():
    # Parse input arguments
    desc = "Implementation for AugMix paper"
    parser = argparse.ArgumentParser(description=desc)
    parser.add_argument('--batch_size',
                        type=int,
                        required=False)
    parser.add_argument('--epochs',
                        help='number of training epochs',
                        type=int,
                        required=True)
    parser.add_argument('--max_lr',
                        help='maximum learning rate for lr scheduler',
                        default=1.0,
                        type=float)
    parser.add_argument('--min_lr',
                        help='minimum learning rate for lr scheduler',
                        default=1e-5,
                        type=float)
    parser.add_argument('--img_size',
                        help='size of the images',
                        default=32,
                        type=int)
    parser.add_argument("--save_dir_path",
                        type=str,
                        help="dir path to save output results",
                        default="",
                        required=False)
    parser.add_argument("--plot_name",
                        type=str,
                        help="filename for the plots",
                        default="history.png",
                        required=False)
    args = vars(parser.parse_args())
    return args

Author: AakashKumarNain, Project: AugMix_TF2
Note: the config.batch_size examples in this article were collected from open-source projects on GitHub and documentation platforms such as MSDocs. The snippets come from community-contributed open-source projects; copyright remains with the original authors, and use or redistribution must follow each project's license. Please do not republish without permission.