In my iPhone app, I take a picture with the camera and then I want to resize it to 290*390 pixels. I was using this method to resize the image:
UIImage *newImage = [image _imageScaledToSize:CGSizeMake(290, 390)
                         interpolationQuality:1];
It worked perfectly, but it is an undocumented function, so I can't use it anymore on iPhone OS 4.
So... what is the simplest way to resize a UIImage?
Use this extension:
extension UIImage {
    public func resize(size: CGSize, completionHandler: (resizedImage: UIImage, data: NSData?) -> ()) {
        // Render off the main thread so large photos don't block the UI.
        dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0)) {
            let newSize: CGSize = size
            let rect = CGRectMake(0, 0, newSize.width, newSize.height)
            UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
            self.drawInRect(rect)
            let newImage = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            // JPEG data at 50% compression quality, in case the caller wants to persist or upload it.
            let imageData = UIImageJPEGRepresentation(newImage, 0.5)
            // Deliver both results back on the main queue.
            dispatch_async(dispatch_get_main_queue()) {
                completionHandler(resizedImage: newImage, data: imageData)
            }
        }
    }
}
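A quick usage sketch (Swift 2 syntax to match the extension above; photo and imageView are hypothetical names standing in for your own objects):

// Resize a camera photo to the 290x390 target off the main thread.
photo.resize(CGSizeMake(290, 390)) { (resizedImage, data) in
    // The completion handler runs on the main queue, so UI updates are safe here.
    imageView.image = resizedImage
}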
I found it hard to find an answer that works out of the box in a Swift 3 project. The main problem with the other answers is that they don't respect the alpha channel of the image. Here is the technique I use in my projects.
extension UIImage {
    func scaledToFit(toSize newSize: CGSize) -> UIImage {
        // If the image already fits, return an untouched copy.
        if (size.width < newSize.width && size.height < newSize.height) {
            return copy() as! UIImage
        }

        // Scale proportionally so the result fits inside newSize.
        let widthScale = newSize.width / size.width
        let heightScale = newSize.height / size.height
        let scaleFactor = widthScale < heightScale ? widthScale : heightScale
        let scaledSize = CGSize(width: size.width * scaleFactor, height: size.height * scaleFactor)

        return self.scaled(toSize: scaledSize, in: CGRect(x: 0.0, y: 0.0, width: scaledSize.width, height: scaledSize.height))
    }

    func scaled(toSize newSize: CGSize, in rect: CGRect) -> UIImage {
        // Only images without an alpha channel can use an opaque context.
        if UIScreen.main.scale == 2.0 {
            UIGraphicsBeginImageContextWithOptions(newSize, !hasAlphaChannel, 2.0)
        }
        else {
            UIGraphicsBeginImageContext(newSize)
        }

        draw(in: rect)
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        return newImage ?? UIImage()
    }

    var hasAlphaChannel: Bool {
        guard let alpha = cgImage?.alphaInfo else {
            return false
        }
        return alpha == CGImageAlphaInfo.first ||
            alpha == CGImageAlphaInfo.last ||
            alpha == CGImageAlphaInfo.premultipliedFirst ||
            alpha == CGImageAlphaInfo.premultipliedLast
    }
}
Usage example:
override func viewDidLoad() {
    super.viewDidLoad()

    let size = CGSize(width: 14.0, height: 14.0)
    if let image = UIImage(named: "barbell")?.scaledToFit(toSize: size) {
        let imageView = UIImageView(image: image)
        imageView.center = CGPoint(x: 100, y: 100)
        view.addSubview(imageView)
    }
}
This code rewrites Apple's extension with added support for images both with and without an alpha channel.
As further reading, I suggest checking this article on the different image-resizing techniques. The current approach offers decent performance, operates on high-level APIs, and is easy to understand. I recommend sticking with it unless you find that image resizing is a performance bottleneck.
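If resizing does turn out to be a bottleneck, one lower-level route is ImageIO's thumbnailing API, which decodes and downsamples in a single step. Below is a minimal sketch, assuming the source image is available as Data; the option keys are real ImageIO constants, but the function name and shape are my own:

import ImageIO
import UIKit

// Downsample image data so the longest side is at most maxPixelSize.
func downsampledImage(data: Data, maxPixelSize: CGFloat) -> UIImage? {
    let options: [CFString: Any] = [
        kCGImageSourceCreateThumbnailFromImageAlways: true, // build the thumbnail even if none is embedded
        kCGImageSourceCreateThumbnailWithTransform: true,   // respect EXIF orientation
        kCGImageSourceShouldCacheImmediately: true,         // decode now rather than at draw time
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
    ]
    guard let source = CGImageSourceCreateWithData(data as CFData, nil),
        let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, options as CFDictionary) else {
            return nil
    }
    return UIImage(cgImage: cgImage)
}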
This improvement to Paul's code will give you a sharp, high-resolution image on an iPhone with a Retina display; otherwise the image is blurry when scaled down.
+ (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
        if ([[UIScreen mainScreen] scale] == 2.0) {
            // Retina display: render at @2x so the result stays sharp.
            UIGraphicsBeginImageContextWithOptions(newSize, YES, 2.0);
        } else {
            UIGraphicsBeginImageContext(newSize);
        }
    } else {
        // Older systems don't respond to -scale.
        UIGraphicsBeginImageContext(newSize);
    }

    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
The correct Swift 3.0 / iOS 10+ solution, using UIGraphicsImageRenderer and its closure syntax:
extension UIImage {
    func imageWith(newSize: CGSize) -> UIImage {
        // UIGraphicsImageRenderer manages the bitmap context and defaults to the screen's scale.
        let image = UIGraphicsImageRenderer(size: newSize).image { _ in
            draw(in: CGRect(origin: .zero, size: newSize))
        }
        return image.withRenderingMode(renderingMode)
    }
}
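For example (assuming an "avatar" image in your asset catalog; the name is hypothetical):

let resized = UIImage(named: "avatar")?.imageWith(newSize: CGSize(width: 290, height: 390))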
Here is the Objective-C version:
@implementation UIImage (ResizeCategory)

- (UIImage *)imageWithSize:(CGSize)newSize
{
    UIGraphicsImageRenderer *renderer = [[UIGraphicsImageRenderer alloc] initWithSize:newSize];
    UIImage *image = [renderer imageWithActions:^(UIGraphicsImageRendererContext * _Nonnull myContext) {
        [self drawInRect:(CGRect){ .origin = CGPointZero, .size = newSize }];
    }];
    return [image imageWithRenderingMode:self.renderingMode];
}

@end