

Python transforms.CenterCrop Method: Code Examples

Published: 2023/12/9

This article introduces typical code examples of the Python method transforms.CenterCrop, collected and organized here for reference.

This article compiles representative usage examples of torchvision.transforms.CenterCrop in Python. If you are unsure what transforms.CenterCrop does, how to call it, or what real-world usage looks like, the curated examples below should help. You can also explore other usage examples from the torchvision.transforms module.

Below are 30 code examples of transforms.CenterCrop, sorted by popularity by default.
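Before the examples, here is a minimal, self-contained sketch of the Resize + CenterCrop evaluation pattern that recurs throughout them; it assumes only that torchvision and Pillow are installed, and the image path 'example.jpg' is a placeholder:

from PIL import Image
from torchvision import transforms

# Resize the shorter side to 256 (preserving aspect ratio), then cut a
# 224x224 patch from the image center and convert it to a float tensor.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

img = Image.open('example.jpg')  # placeholder path
x = preprocess(img)
print(x.shape)  # torch.Size([3, 224, 224]) for an RGB input

Note that recent torchvision versions zero-pad inputs smaller than the requested crop size rather than raising an error, so CenterCrop(n) always returns an n x n output.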

Example 1: _get_ds_val

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def _get_ds_val(self, images_spec, crop=False, truncate=False):
    img_to_tensor_t = [images_loader.IndexImagesDataset.to_tensor_uint8_transform()]
    if crop:
        img_to_tensor_t.insert(0, transforms.CenterCrop(crop))
    img_to_tensor_t = transforms.Compose(img_to_tensor_t)
    fixed_first = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'fixedimg.jpg')
    if not os.path.isfile(fixed_first):
        print(f'INFO: No file found at {fixed_first}')
        fixed_first = None
    ds = images_loader.IndexImagesDataset(
        images=images_loader.ImagesCached(
            images_spec, self.config_dl.image_cache_pkl,
            min_size=self.config_dl.val_glob_min_size),
        to_tensor_transform=img_to_tensor_t,
        fixed_first=fixed_first)  # fix a first image for consistency in TensorBoard
    if truncate:
        ds = pe.TruncatedDataset(ds, num_elemens=truncate)
    return ds

Source: fab-jul / L3C-PyTorch

Example 2: get_lsun_dataloader

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def get_lsun_dataloader(path_to_data='../lsun', dataset='bedroom_train',
                        batch_size=64):
    """LSUN dataloader with (128, 128) sized images.

    dataset : str
        One of 'bedroom_val' or 'bedroom_train'
    """
    # Compose transforms
    transform = transforms.Compose([
        transforms.Resize(128),
        transforms.CenterCrop(128),
        transforms.ToTensor()
    ])

    # Get dataset
    lsun_dset = datasets.LSUN(db_path=path_to_data, classes=[dataset],
                              transform=transform)

    # Create dataloader
    return DataLoader(lsun_dset, batch_size=batch_size, shuffle=True)

Source: vandit15 / Self-Supervised-Gans-Pytorch

Example 3: save_distorted

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def save_distorted(method=gaussian_noise):
    for severity in range(1, 6):
        print(method.__name__, severity)
        distorted_dataset = DistortImageFolder(
            root="/share/data/vision-greg/ImageNet/clsloc/images/val",
            method=method, severity=severity,
            transform=trn.Compose([trn.Resize(256), trn.CenterCrop(224)]))
        distorted_dataset_loader = torch.utils.data.DataLoader(
            distorted_dataset, batch_size=100, shuffle=False, num_workers=4)
        for _ in distorted_dataset_loader:
            continue

# /// End Further Setup ///

# /// Display Results ///

Source: hendrycks / robustness

Example 4: transform

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def transform(is_train=True, normalize=True):
    """
    Returns a transform object
    """
    filters = []
    filters.append(Scale(256))
    if is_train:
        filters.append(RandomCrop(224))
    else:
        filters.append(CenterCrop(224))
    if is_train:
        filters.append(RandomHorizontalFlip())
    filters.append(ToTensor())
    if normalize:
        filters.append(Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225]))
    return Compose(filters)

Source: uwnlp / verb-attributes

Example 5: __init__

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def __init__(
    self,
    resize: int = ImagenetConstants.RESIZE,
    crop_size: int = ImagenetConstants.CROP_SIZE,
    mean: List[float] = ImagenetConstants.MEAN,
    std: List[float] = ImagenetConstants.STD,
):
    """The constructor method of ImagenetNoAugmentTransform class.

    Args:
        resize: expected image size per dimension after resizing
        crop_size: expected size for a dimension of central cropping
        mean: a 3-tuple denoting the pixel RGB mean
        std: a 3-tuple denoting the pixel RGB standard deviation
    """
    self.transform = transforms.Compose(
        [
            transforms.Resize(resize),
            transforms.CenterCrop(crop_size),
            transforms.ToTensor(),
            transforms.Normalize(mean=mean, std=std),
        ]
    )

Source: facebookresearch / ClassyVision

Example 6: make

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def make(sz_resize=256, sz_crop=227, mean=[104, 117, 128],
         std=[1, 1, 1], rgb_to_bgr=True, is_train=True,
         intensity_scale=None):
    return transforms.Compose([
        RGBToBGR() if rgb_to_bgr else Identity(),
        transforms.RandomResizedCrop(sz_crop) if is_train else Identity(),
        transforms.Resize(sz_resize) if not is_train else Identity(),
        transforms.CenterCrop(sz_crop) if not is_train else Identity(),
        transforms.RandomHorizontalFlip() if is_train else Identity(),
        transforms.ToTensor(),
        ScaleIntensities(
            *intensity_scale) if intensity_scale is not None else Identity(),
        transforms.Normalize(
            mean=mean,
            std=std,
        )
    ])

Source: CompVis / metric-learning-divide-and-conquer

Example 7: test_on_validation_set

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def test_on_validation_set(model, validation_set=None):
    if validation_set is None:
        validation_set = get_validation_set()
    total_ssim = 0
    total_psnr = 0
    iters = len(validation_set.tuples)
    crop = CenterCrop(config.CROP_SIZE)
    for i, tup in enumerate(validation_set.tuples):
        x1, gt, x2 = [crop(load_img(p)) for p in tup]
        pred = interpolate(model, x1, x2)
        gt = pil_to_tensor(gt)
        pred = pil_to_tensor(pred)
        total_ssim += ssim(pred, gt).item()
        total_psnr += psnr(pred, gt).item()
        print(f'#{i+1} done')
    avg_ssim = total_ssim / iters
    avg_psnr = total_psnr / iters
    print(f'avg_ssim: {avg_ssim}, avg_psnr: {avg_psnr}')

Source: martkartasev / sepconv

Example 8: test_linear_interp

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def test_linear_interp(validation_set=None):
    if validation_set is None:
        validation_set = get_validation_set()
    total_ssim = 0
    total_psnr = 0
    iters = len(validation_set.tuples)
    crop = CenterCrop(config.CROP_SIZE)
    for tup in validation_set.tuples:
        x1, gt, x2 = [pil_to_tensor(crop(load_img(p))) for p in tup]
        pred = torch.mean(torch.stack((x1, x2), dim=0), dim=0)
        total_ssim += ssim(pred, gt).item()
        total_psnr += psnr(pred, gt).item()
    avg_ssim = total_ssim / iters
    avg_psnr = total_psnr / iters
    print(f'avg_ssim: {avg_ssim}, avg_psnr: {avg_psnr}')

Source: martkartasev / sepconv

Example 9: __init__

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def __init__(self, patches, use_cache, augment_data):
    super(PatchDataset, self).__init__()
    self.patches = patches
    self.crop = CenterCrop(config.CROP_SIZE)
    if augment_data:
        self.random_transforms = [RandomRotation((90, 90)), RandomVerticalFlip(1.0),
                                  RandomHorizontalFlip(1.0), (lambda x: x)]
        self.get_aug_transform = (lambda: random.sample(self.random_transforms, 1)[0])
    else:
        # Transform does nothing. Not sure if horrible or very elegant...
        self.get_aug_transform = (lambda: (lambda x: x))
    if use_cache:
        self.load_patch = data_manager.load_cached_patch
    else:
        self.load_patch = data_manager.load_patch
    print('Dataset ready with {} tuples.'.format(len(patches)))

Source: martkartasev / sepconv

Example 10: preprocess

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def preprocess(self):
    if self.train:
        return transforms.Compose([
            transforms.RandomResizedCrop(self.image_size),
            transforms.RandomHorizontalFlip(),
            transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.2),
            transforms.ToTensor(),
            transforms.Normalize(self.mean, self.std),
        ])
    else:
        # 0.875 crop ratio: e.g. 224 / 0.875 = 256, i.e. Resize(256) + CenterCrop(224)
        return transforms.Compose([
            transforms.Resize((int(self.image_size / 0.875), int(self.image_size / 0.875))),
            transforms.CenterCrop(self.image_size),
            transforms.ToTensor(),
            transforms.Normalize(self.mean, self.std),
        ])

Source: wandering007 / nasnet-pytorch

Example 11: __getitem__

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def __getitem__(self, index):
    # get downscaled, cropped and gt (if available) image
    hr_image = Image.open(self.hr_files[index])
    w, h = hr_image.size
    cs = utils.calculate_valid_crop_size(min(w, h), self.upscale_factor)
    if self.crop_size is not None:
        cs = min(cs, self.crop_size)
    cropped_image = TF.to_tensor(T.CenterCrop(cs // self.upscale_factor)(hr_image))
    hr_image = T.CenterCrop(cs)(hr_image)
    hr_image = TF.to_tensor(hr_image)
    resized_image = utils.imresize(hr_image, 1.0 / self.upscale_factor, True)
    if self.lr_files is None:
        return resized_image, cropped_image, resized_image
    else:
        lr_image = Image.open(self.lr_files[index])
        lr_image = TF.to_tensor(T.CenterCrop(cs // self.upscale_factor)(lr_image))
        return resized_image, cropped_image, lr_image

Source: ManuelFritsche / real-world-sr

Example 12: __init__

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def __init__(self, options):
    transform_list = []
    if options.image_size is not None:
        transform_list.append(transforms.Resize((options.image_size, options.image_size)))
        # transform_list.append(transforms.CenterCrop(options.image_size))
    transform_list.append(transforms.ToTensor())
    if options.image_colors == 1:
        transform_list.append(transforms.Normalize(mean=[0.5], std=[0.5]))
    elif options.image_colors == 3:
        transform_list.append(transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]))
    transform = transforms.Compose(transform_list)
    dataset = ImagePairs(options.data_dir, split=options.split, transform=transform)
    self.dataloader = DataLoader(
        dataset,
        batch_size=options.batch_size,
        num_workers=options.loader_workers,
        shuffle=True,
        drop_last=True,
        pin_memory=options.pin_memory
    )
    self.iterator = iter(self.dataloader)

Source: unicredit / ganzo

Example 13: __init__

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def __init__(self, path, classes, stage='train'):
    self.data = []
    for i, c in enumerate(classes):
        cls_path = osp.join(path, c)
        images = os.listdir(cls_path)
        for image in images:
            self.data.append((osp.join(cls_path, image), i))
    normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                     std=[0.229, 0.224, 0.225])
    if stage == 'train':
        self.transforms = transforms.Compose([transforms.RandomResizedCrop(224),
                                              transforms.RandomHorizontalFlip(),
                                              transforms.ToTensor(),
                                              normalize])
    if stage == 'test':
        self.transforms = transforms.Compose([transforms.Resize(256),
                                              transforms.CenterCrop(224),
                                              transforms.ToTensor(),
                                              normalize])

Source: cyvius96 / DGP

Example 14: __init__

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def __init__(self, opt):
    self.image_path = opt.dataroot
    self.is_train = opt.is_train
    self.d_num = opt.n_attribute
    print('Start preprocessing dataset..!')
    random.seed(1234)
    self.preprocess()
    print('Finished preprocessing dataset..!')
    if self.is_train:
        trs = [transforms.Resize(opt.load_size, interpolation=Image.ANTIALIAS),
               transforms.RandomCrop(opt.fine_size)]
    else:
        trs = [transforms.Resize(opt.load_size, interpolation=Image.ANTIALIAS),
               transforms.CenterCrop(opt.fine_size)]
    if opt.is_flip:
        trs.append(transforms.RandomHorizontalFlip())
    self.transform = transforms.Compose(trs)
    self.norm = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
    self.num_data = max(self.num)

Source: Xiaoming-Yu / DMIT

Example 15: __init__

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def __init__(self, opt):
    '''Initialize this dataset class.

    We need to specify the path of the dataset and the domain label of each image.
    '''
    self.image_list = []
    self.label_list = []
    if opt.is_train:
        trs = [transforms.Resize(opt.load_size, interpolation=Image.ANTIALIAS),
               transforms.RandomCrop(opt.fine_size)]
    else:
        trs = [transforms.Resize(opt.load_size, interpolation=Image.ANTIALIAS),
               transforms.CenterCrop(opt.fine_size)]
    if opt.is_flip:
        trs.append(transforms.RandomHorizontalFlip())
    trs.append(transforms.ToTensor())
    trs.append(transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)))
    self.transform = transforms.Compose(trs)
    self.num_data = len(self.image_list)

Source: Xiaoming-Yu / DMIT

Example 16: get_transform

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def get_transform(data_name, split_name, opt):
    normalizer = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                      std=[0.229, 0.224, 0.225])
    t_list = []
    if split_name == 'train':
        t_list = [transforms.RandomSizedCrop(opt.crop_size),
                  transforms.RandomHorizontalFlip()]
    elif split_name == 'val':
        t_list = [transforms.Scale(256), transforms.CenterCrop(224)]
    elif split_name == 'test':
        t_list = [transforms.Scale(256), transforms.CenterCrop(224)]
    t_end = [transforms.ToTensor(), normalizer]
    transform = transforms.Compose(t_list + t_end)
    return transform

Source: ExplorerFreda / VSE-C

Example 17: test_loader

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def test_loader(path, batch_size=16, num_workers=1, pin_memory=True):
    normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    return data.DataLoader(
        datasets.ImageFolder(path,
                             transforms.Compose([
                                 transforms.Resize(256),
                                 transforms.CenterCrop(224),
                                 transforms.ToTensor(),
                                 normalize,
                             ])),
        batch_size=batch_size,
        shuffle=False,
        num_workers=num_workers,
        pin_memory=pin_memory)

Source: jindongwang / transferlearning

Example 18: get_test_dataset

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def get_test_dataset(self, testset):
    to_tensor_transform = [IndexImagesDataset.to_tensor_uint8_transform()]
    if self.flags.crop:
        print('*** WARN: Cropping to {}'.format(self.flags.crop))
        to_tensor_transform.insert(0, transforms.CenterCrop(self.flags.crop))
    return IndexImagesDataset(
        testset,
        to_tensor_transform=transforms.Compose(to_tensor_transform))

Source: fab-jul / L3C-PyTorch

Example 19: check_dataset

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def check_dataset(opt):
    normalize_transform = transforms.Compose([transforms.ToTensor(),
                                              transforms.Normalize((0.485, 0.456, 0.406),
                                                                   (0.229, 0.224, 0.225))])
    train_large_transform = transforms.Compose([transforms.RandomResizedCrop(224),
                                                transforms.RandomHorizontalFlip()])
    val_large_transform = transforms.Compose([transforms.Resize(256),
                                              transforms.CenterCrop(224)])
    train_small_transform = transforms.Compose([transforms.Pad(4),
                                                transforms.RandomCrop(32),
                                                transforms.RandomHorizontalFlip()])
    splits = check_split(opt)

    if opt.dataset in ['cub200', 'indoor', 'stanford40', 'dog']:
        train, val = 'train', 'test'
        train_transform = transforms.Compose([train_large_transform, normalize_transform])
        val_transform = transforms.Compose([val_large_transform, normalize_transform])
        sets = [dset.ImageFolder(root=os.path.join(opt.dataroot, train), transform=train_transform),
                dset.ImageFolder(root=os.path.join(opt.dataroot, train), transform=val_transform),
                dset.ImageFolder(root=os.path.join(opt.dataroot, val), transform=val_transform)]
        sets = [FolderSubset(dataset, *split) for dataset, split in zip(sets, splits)]
        opt.num_classes = len(splits[0][0])
    else:
        raise Exception('Unknown dataset')

    loaders = [torch.utils.data.DataLoader(dataset,
                                           batch_size=opt.batchSize,
                                           shuffle=True,
                                           num_workers=0) for dataset in sets]
    return loaders

Source: alinlab / L2T-ww

Example 20: get_dataset_loaders

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def get_dataset_loaders(model, dataset, workers):
    target_size = (model["common"]["image_size"],) * 2
    batch_size = model["common"]["batch_size"]
    path = dataset["common"]["dataset"]

    mean, std = [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]

    transform = JointCompose(
        [
            JointTransform(ConvertImageMode("RGB"), ConvertImageMode("P")),
            JointTransform(Resize(target_size, Image.BILINEAR), Resize(target_size, Image.NEAREST)),
            JointTransform(CenterCrop(target_size), CenterCrop(target_size)),
            JointRandomHorizontalFlip(0.5),
            JointRandomRotation(0.5, 90),
            JointRandomRotation(0.5, 90),
            JointRandomRotation(0.5, 90),
            JointTransform(ImageToTensor(), MaskToTensor()),
            JointTransform(Normalize(mean=mean, std=std), None),
        ]
    )

    train_dataset = SlippyMapTilesConcatenation(
        [os.path.join(path, "training", "images")], os.path.join(path, "training", "labels"), transform
    )
    val_dataset = SlippyMapTilesConcatenation(
        [os.path.join(path, "validation", "images")], os.path.join(path, "validation", "labels"), transform
    )

    assert len(train_dataset) > 0, "at least one tile in training dataset"
    assert len(val_dataset) > 0, "at least one tile in validation dataset"

    train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True, drop_last=True, num_workers=workers)
    val_loader = DataLoader(val_dataset, batch_size=batch_size, shuffle=False, drop_last=True, num_workers=workers)

    return train_loader, val_loader

Source: mapbox / robosat

Example 21: scale_crop

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def scale_crop(input_size, scale_size=None, normalize=__imagenet_stats):
    t_list = [
        transforms.CenterCrop(input_size),
        transforms.ToTensor(),
        transforms.Normalize(**normalize),
    ]
    if scale_size != input_size:
        t_list = [transforms.Resize(scale_size)] + t_list
    return transforms.Compose(t_list)

Source: Randl / MobileNetV3-pytorch

Example 22: main

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def main():
    args = parser.parse_args()

    model = ghostnet(num_classes=args.num_classes, width=args.width, dropout=args.dropout)
    model.load_state_dict(torch.load('./models/state_dict_93.98.pth'))

    if args.num_gpu > 1:
        model = torch.nn.DataParallel(model, device_ids=list(range(args.num_gpu))).cuda()
    elif args.num_gpu < 1:
        model = model
    else:
        model = model.cuda()
    print('GhostNet created.')

    valdir = os.path.join(args.data, 'val')
    normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                     std=[0.229, 0.224, 0.225])
    loader = torch.utils.data.DataLoader(
        datasets.ImageFolder(valdir, transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            normalize,
        ])),
        batch_size=args.batch_size, shuffle=False,
        num_workers=args.workers, pin_memory=True)

    model.eval()
    validate_loss_fn = nn.CrossEntropyLoss().cuda()
    eval_metrics = validate(model, loader, validate_loss_fn, args)
    print(eval_metrics)

Source: huawei-noah / ghostnet

Example 23: Imagenet_eval

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def Imagenet_eval():
    return transforms.Compose([
        transforms.Scale(256),       # resize to size=(w, h) or (size, size)
        transforms.CenterCrop(224),  # center-crop the image to the given size
        transforms.ToTensor(),       # convert to a tensor
        Normalize_Imagenet(),
    ])

Source: wyf2017 / DSMnet

Example 24: data_loader

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def data_loader(root, batch_size=256, workers=1, pin_memory=True):
    traindir = os.path.join(root, 'ILSVRC2012_img_train')
    valdir = os.path.join(root, 'ILSVRC2012_img_val')
    normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                     std=[0.229, 0.224, 0.225])

    train_dataset = datasets.ImageFolder(
        traindir,
        transforms.Compose([
            transforms.RandomResizedCrop(224),
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
            normalize
        ])
    )
    val_dataset = datasets.ImageFolder(
        valdir,
        transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            normalize
        ])
    )

    train_loader = torch.utils.data.DataLoader(
        train_dataset,
        batch_size=batch_size,
        shuffle=True,
        num_workers=workers,
        pin_memory=pin_memory,
        sampler=None
    )
    val_loader = torch.utils.data.DataLoader(
        val_dataset,
        batch_size=batch_size,
        shuffle=False,
        num_workers=workers,
        pin_memory=pin_memory
    )
    return train_loader, val_loader

Source: jiweibo / ImageNet

Example 25: __init__

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def __init__(self, args):
    super(BigCIFAR10, self).__init__()
    data_root = os.path.join(args.data, "cifar10")
    use_cuda = torch.cuda.is_available()
    input_size = 128

    # Data loading code
    kwargs = {"num_workers": args.workers, "pin_memory": True} if use_cuda else {}

    train_dataset = torchvision.datasets.CIFAR10(
        root=data_root,
        train=True,
        download=True,
        transform=transforms.Compose([
            transforms.RandomResizedCrop(input_size),
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ]),
    )
    self.train_loader = torch.utils.data.DataLoader(
        train_dataset, batch_size=args.batch_size, shuffle=True, **kwargs
    )

    test_dataset = torchvision.datasets.CIFAR10(
        root=data_root,
        train=False,
        download=True,
        transform=transforms.Compose([
            transforms.Resize(input_size),
            transforms.CenterCrop(input_size),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ]),
    )
    self.val_loader = torch.utils.data.DataLoader(
        test_dataset, batch_size=args.batch_size, shuffle=False, **kwargs
    )

Source: allenai / hidden-networks

Example 26: show_performance

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def show_performance(distortion_name):
    errs = []
    for severity in range(1, 6):
        distorted_dataset = dset.ImageFolder(
            root='/share/data/vision-greg/DistortedImageNet/JPEG/' + distortion_name + '/' + str(severity),
            transform=trn.Compose([trn.CenterCrop(224), trn.ToTensor(), trn.Normalize(mean, std)]))
        distorted_dataset_loader = torch.utils.data.DataLoader(
            distorted_dataset, batch_size=args.test_bs, shuffle=False, num_workers=args.prefetch, pin_memory=True)

        correct = 0
        for batch_idx, (data, target) in enumerate(distorted_dataset_loader):
            data = V(data.cuda(), volatile=True)
            output = net(data)
            pred = output.data.max(1)[1]
            correct += pred.eq(target.cuda()).sum()

        errs.append(1 - 1. * correct / len(distorted_dataset))

    print('\n=Average', tuple(errs))
    return np.mean(errs)

# /// End Further Setup ///

# /// Display Results ///

Source: hendrycks / robustness

Example 27: resize_crop_image

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def resize_crop_image(image, new_image_dims):
    image_dims = [image.shape[1], image.shape[0]]
    if image_dims == new_image_dims:
        return image
    resize_width = int(math.floor(new_image_dims[1] * float(image_dims[0]) / float(image_dims[1])))
    image = transforms.Resize([new_image_dims[1], resize_width], interpolation=Image.NEAREST)(Image.fromarray(image))
    image = transforms.CenterCrop([new_image_dims[1], new_image_dims[0]])(image)
    image = np.array(image)
    return image

Source: daveredrum / Pointnet2.ScanNet

Example 28: resize_crop_image

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def resize_crop_image(image, new_image_dims):
    image_dims = [image.shape[1], image.shape[0]]
    if image_dims != new_image_dims:
        resize_width = int(math.floor(new_image_dims[1] * float(image_dims[0]) / float(image_dims[1])))
        image = transforms.Resize([new_image_dims[1], resize_width], interpolation=Image.NEAREST)(Image.fromarray(image))
        image = transforms.CenterCrop([new_image_dims[1], new_image_dims[0]])(image)
    return np.array(image)

Source: daveredrum / Pointnet2.ScanNet

Example 29: _resize_crop_image

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def _resize_crop_image(self, image, new_image_dims):
    image_dims = [image.shape[1], image.shape[0]]
    if image_dims != new_image_dims:
        resize_width = int(math.floor(new_image_dims[1] * float(image_dims[0]) / float(image_dims[1])))
        image = transforms.Resize([new_image_dims[1], resize_width], interpolation=Image.NEAREST)(Image.fromarray(image))
        image = transforms.CenterCrop([new_image_dims[1], new_image_dims[0]])(image)
    return np.array(image)

Source: daveredrum / Pointnet2.ScanNet

Example 30: display_transform

# Required import: from torchvision import transforms
# Or: from torchvision.transforms import CenterCrop
def display_transform():
    return Compose([
        ToPILImage(),
        Resize(400),
        CenterCrop(400),
        ToTensor()
    ])

Source: amanchadha / iSeeBetter

Note: The torchvision.transforms.CenterCrop examples in this article were collected from source code and documentation platforms such as GitHub and MSDocs. The snippets come from open-source projects contributed by many developers; copyright of the source code remains with the original authors, and any use or redistribution should follow the corresponding project's license. Please do not repost without permission.

Summary

The above is the full collection of Python transforms.CenterCrop code examples gathered here. We hope these examples help you solve the problem that brought you to this article.

