I have some code that resizes an image so I can get a scaled chunk of the center of the image - I use this to take a UIImage and return a small, square representation of it, similar to what's seen in the album view of the Photos app. (I know I could use a UIImageView and adjust the crop mode to achieve the same result, but these images are sometimes displayed in UIWebViews.)
I've started to notice some crashes in this code, and I'm a bit stumped. I have two different theories and I'm wondering whether either of them is on the right track.
Theory 1) I achieve the cropping by drawing into an off-screen image context of my target size. Since I want the center portion of the image, I set the CGRect argument passed to drawInRect: to something larger than the bounds of my image context. I was hoping that was kosher, but am I instead attempting to draw over other memory that I shouldn't be touching?
Theory 2) I'm doing all of this on a background thread. I know there are portions of UIKit that are restricted to the main thread. I was assuming/hoping that drawing to an off-screen view wasn't one of them. Am I wrong?
(Oh, how I miss NSImage's drawInRect:fromRect:operation:fraction: method.)
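For reference, a minimal sketch of the pattern in question - rendering the centered square into an off-screen image context from a background queue. It assumes iOS 4 or later, where UIGraphicsBeginImageContextWithOptions is documented as callable from any thread, and the names (renderCenterSquare, sourceImage) and sizes are made up for illustration:
// Sketch only: off-screen center-square render on a background queue.
// Assumes iOS 4+, where UIGraphicsBeginImageContextWithOptions and
// -[UIImage drawInRect:] may be used off the main thread.
static UIImage *renderCenterSquare(UIImage *image, CGFloat side)
{
    CGFloat minimumSide = MIN(image.size.width, image.size.height);
    CGFloat ratio = side / minimumSide;

    UIGraphicsBeginImageContextWithOptions(CGSizeMake(side, side), YES, 0.0);
    // Draw the whole image scaled so its shorter side fills the context.
    // The rect is larger than the context on one axis; Core Graphics clips
    // that overflow to the bitmap rather than writing past it.
    [image drawInRect:CGRectMake((minimumSide - image.size.width) * ratio / 2.0,
                                 (minimumSide - image.size.height) * ratio / 2.0,
                                 image.size.width * ratio,
                                 image.size.height * ratio)];
    UIImage *square = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return square;
}

// Usage from a background queue, hopping back to the main thread for UI work
// (sourceImage is assumed to be defined elsewhere):
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIImage *thumb = renderCenterSquare(sourceImage, 75.0);
    dispatch_async(dispatch_get_main_queue(), ^{
        // hand thumb to the UI here
    });
});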
Here's my UIImage cropping implementation, which obeys the imageOrientation property. All orientations were tested thoroughly.
inline double rad(double deg)
{
    return deg / 180.0 * M_PI;
}

UIImage *UIImageCrop(UIImage *img, CGRect rect)
{
    // Map the crop rect from the oriented (UIKit) coordinate space into the
    // raw CGImage coordinate space before cropping.
    CGAffineTransform rectTransform;
    switch (img.imageOrientation)
    {
        case UIImageOrientationLeft:
            rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(90)), 0, -img.size.height);
            break;
        case UIImageOrientationRight:
            rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(-90)), -img.size.width, 0);
            break;
        case UIImageOrientationDown:
            rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(-180)), -img.size.width, -img.size.height);
            break;
        default:
            rectTransform = CGAffineTransformIdentity;
    }

    // Account for the image's scale (points vs. pixels).
    rectTransform = CGAffineTransformScale(rectTransform, img.scale, img.scale);

    CGImageRef imageRef = CGImageCreateWithImageInRect([img CGImage], CGRectApplyAffineTransform(rect, rectTransform));
    UIImage *result = [UIImage imageWithCGImage:imageRef scale:img.scale orientation:img.imageOrientation];
    CGImageRelease(imageRef);
    return result;
}
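For example, to grab the centered square the question asks about, the crop rect might be computed like this (a sketch; sourceImage is assumed to exist):
// Sketch: crop the largest centered square from an existing UIImage
// (sourceImage is assumed to be defined elsewhere).
CGFloat side = MIN(sourceImage.size.width, sourceImage.size.height);
CGRect squareRect = CGRectMake((sourceImage.size.width  - side) / 2.0,
                               (sourceImage.size.height - side) / 2.0,
                               side, side);
UIImage *squareImage = UIImageCrop(sourceImage, squareRect);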
I wasn't happy with other solutions because they either draw several times (using more power than necessary) or have problems with orientation. Here is the scaled square croppedImage I use, starting from a UIImage *image:
CGFloat minimumSide = fminf(image.size.width, image.size.height);
CGFloat finalSquareSize = 600.;

// create a new drawing context of the target square size
CGRect rect = CGRectMake(0, 0, finalSquareSize, finalSquareSize);
CGFloat scalingRatio = finalSquareSize / minimumSide;
UIGraphicsBeginImageContext(rect.size);

// draw the image so its shorter side fills the square; the longer side is clipped
[image drawInRect:CGRectMake((minimumSide - image.size.width) * scalingRatio / 2.,
                             (minimumSide - image.size.height) * scalingRatio / 2.,
                             image.size.width * scalingRatio,
                             image.size.height * scalingRatio)];

UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You can create a UIImage category and use it wherever you need it. Based on HitScan's answer and the comments below it:
@implementation UIImage (Crop)

- (UIImage *)crop:(CGRect)rect {
    // Convert the rect from points to pixels so it matches the underlying CGImage.
    rect = CGRectMake(rect.origin.x * self.scale,
                      rect.origin.y * self.scale,
                      rect.size.width * self.scale,
                      rect.size.height * self.scale);

    CGImageRef imageRef = CGImageCreateWithImageInRect([self CGImage], rect);
    UIImage *result = [UIImage imageWithCGImage:imageRef
                                          scale:self.scale
                                    orientation:self.imageOrientation];
    CGImageRelease(imageRef);
    return result;
}

@end
You can use it like this:
UIImage *imageToCrop = <yourImageToCrop>;
CGRect cropRect = <areaYouWantToCrop>;
//for example
//CGRectMake(0, 40, 320, 100);
UIImage *croppedImage = [imageToCrop crop:cropRect];
Swift version of wolf's answer, which worked well for me:
public extension UIImage {
    func croppedImage(inRect rect: CGRect) -> UIImage {
        let rad: (Double) -> CGFloat = { deg in
            return CGFloat(deg / 180.0 * .pi)
        }
        var rectTransform: CGAffineTransform
        switch imageOrientation {
        case .left:
            let rotation = CGAffineTransform(rotationAngle: rad(90))
            rectTransform = rotation.translatedBy(x: 0, y: -size.height)
        case .right:
            let rotation = CGAffineTransform(rotationAngle: rad(-90))
            rectTransform = rotation.translatedBy(x: -size.width, y: 0)
        case .down:
            let rotation = CGAffineTransform(rotationAngle: rad(-180))
            rectTransform = rotation.translatedBy(x: -size.width, y: -size.height)
        default:
            rectTransform = .identity
        }
        rectTransform = rectTransform.scaledBy(x: scale, y: scale)
        let transformedRect = rect.applying(rectTransform)
        let imageRef = cgImage!.cropping(to: transformedRect)!
        let result = UIImage(cgImage: imageRef, scale: scale, orientation: imageOrientation)
        return result
    }
}
- (UIImage *)squareImageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    double ratio;
    double delta;
    CGPoint offset;

    //make a new square size, that is the resized image's width
    CGSize sz = CGSizeMake(newSize.width, newSize.width);

    //figure out if the picture is landscape or portrait, then
    //calculate scale factor and offset
    if (image.size.width > image.size.height) {
        ratio = newSize.width / image.size.width;
        delta = (ratio * image.size.width - ratio * image.size.height);
        offset = CGPointMake(delta / 2, 0);
    } else {
        ratio = newSize.width / image.size.height;
        delta = (ratio * image.size.height - ratio * image.size.width);
        offset = CGPointMake(0, delta / 2);
    }

    //make the final clipping rect based on the calculated values
    CGRect clipRect = CGRectMake(-offset.x, -offset.y,
                                 (ratio * image.size.width) + delta,
                                 (ratio * image.size.height) + delta);

    //start a new context, with scale factor 0.0 so retina displays get
    //high quality image
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
        UIGraphicsBeginImageContextWithOptions(sz, YES, 0.0);
    } else {
        UIGraphicsBeginImageContext(sz);
    }

    UIRectClip(clipRect);
    [image drawInRect:clipRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return newImage;
}
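A possible call site, as a sketch - the receiver, image, and size here are illustrative, since the answer doesn't say which class the method belongs to:
// Sketch: produce a 75x75-point square thumbnail. 'self' stands in for
// whatever object declares the method above; sourceImage is assumed to exist.
UIImage *thumbnail = [self squareImageWithImage:sourceImage
                                   scaledToSize:CGSizeMake(75.0, 75.0)];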