I have some code that resizes an image so I can get a scaled chunk of the center of the image - I use this to take a UIImage and return a small, square representation of it, similar to what's seen in the album view of the Photos app. (I know I could use a UIImageView and adjust the crop mode to achieve the same result, but these images are sometimes displayed in UIWebViews.)

I've started noticing some crashes in this code and I'm stumped. I have two different theories and I'm not sure which one is on target.

Theory 1) I achieve the cropping by drawing into an offscreen image context of my target size. Since I want the center portion of the image, I set the CGRect argument passed to drawInRect: to something larger than the bounds of my image context. I was hoping that was kosher, but am I instead attempting to draw over other memory that I shouldn't be touching?

Theory 2) I'm doing all of this on a background thread. I know there are parts of UIKit that are restricted to the main thread. I was assuming/hoping that drawing to an offscreen view wasn't one of them. Am I wrong?

(Oh, how I miss NSImage's drawInRect:fromRect:operation:fraction: method.)
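
For illustration, here is a minimal sketch of the kind of approach described above (an offscreen image context at the target size, with the image drawn through a rect larger than the context so only the center survives). The method name and sizing math are illustrative assumptions, not the exact code from the question:

// Sketch: draw the image into a square offscreen context, offset so the center is kept.
// Anything drawn outside the context's bounds is simply clipped away.
- (UIImage *)centerSquareThumbnailOfImage:(UIImage *)image withSide:(CGFloat)side {
    CGSize targetSize = CGSizeMake(side, side);
    CGFloat scale = side / MIN(image.size.width, image.size.height);
    CGSize scaledSize = CGSizeMake(image.size.width * scale, image.size.height * scale);

    // The draw rect may extend past the context bounds; the overflow is cropped by the context.
    CGRect drawRect = CGRectMake((targetSize.width - scaledSize.width) / 2.0,
                                 (targetSize.height - scaledSize.height) / 2.0,
                                 scaledSize.width, scaledSize.height);

    UIGraphicsBeginImageContextWithOptions(targetSize, YES, 0.0);
    [image drawInRect:drawRect];
    UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return thumbnail;
}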


Current answer

- (UIImage *)getSubImage:(CGRect)rect {
    // Crop the underlying CGImage directly; no offscreen drawing pass is needed for a plain crop.
    CGImageRef subImageRef = CGImageCreateWithImageInRect(self.CGImage, rect);
    UIImage *smallImg = [UIImage imageWithCGImage:subImageRef];
    CGImageRelease(subImageRef); // CGImageCreateWithImageInRect returns an owned reference that must be released
    return smallImg;
}
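
For completeness, a usage sketch for the method above, assuming getSubImage: is declared in a UIImage category; the image name and the centered-square math are illustrative:

// Hypothetical call site: grab a centered square crop of a photo.
// CGImageCreateWithImageInRect works in the CGImage's pixel space, so the
// rect is scaled by the image's scale factor here.
UIImage *photo = [UIImage imageNamed:@"photo.jpg"];
CGFloat side = MIN(photo.size.width, photo.size.height) * photo.scale;
CGRect centerRect = CGRectMake((photo.size.width * photo.scale - side) / 2.0,
                               (photo.size.height * photo.scale - side) / 2.0,
                               side, side);
UIImage *thumb = [photo getSubImage:centerRect];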

Other answers

The following code snippet may help.

import UIKit

extension UIImage {
    func cropImage(toRect rect: CGRect) -> UIImage? {
        if let imageRef = self.cgImage?.cropping(to: rect) {
            return UIImage(cgImage: imageRef)
        }
        return nil
    }
}

Here is my UIImage crop implementation that obeys the imageOrientation property. All orientations have been tested thoroughly.

inline double rad(double deg)
{
    return deg / 180.0 * M_PI;
}

UIImage* UIImageCrop(UIImage* img, CGRect rect)
{
    CGAffineTransform rectTransform;
    switch (img.imageOrientation)
    {
        case UIImageOrientationLeft:
            rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(90)), 0, -img.size.height);
            break;
        case UIImageOrientationRight:
            rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(-90)), -img.size.width, 0);
            break;
        case UIImageOrientationDown:
            rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(-180)), -img.size.width, -img.size.height);
            break;
        default:
            rectTransform = CGAffineTransformIdentity;
    };
    rectTransform = CGAffineTransformScale(rectTransform, img.scale, img.scale);

    CGImageRef imageRef = CGImageCreateWithImageInRect([img CGImage], CGRectApplyAffineTransform(rect, rectTransform));
    UIImage *result = [UIImage imageWithCGImage:imageRef scale:img.scale orientation:img.imageOrientation];
    CGImageRelease(imageRef);
    return result;
}
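
A possible call site for the function above (the image name and centered-square rect are illustrative assumptions). Note that the rect is given in the image's oriented point coordinate space, since UIImageCrop applies the orientation and scale transforms before cropping:

UIImage *photo = [UIImage imageNamed:@"photo.jpg"];
CGFloat side = MIN(photo.size.width, photo.size.height);
// Centered square in point coordinates; UIImageCrop maps it into CGImage pixel space.
CGRect centerSquare = CGRectMake((photo.size.width - side) / 2.0,
                                 (photo.size.height - side) / 2.0,
                                 side, side);
UIImage *cropped = UIImageCrop(photo, centerSquare);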

Swift 3 version

func cropImage(imageToCrop: UIImage, toRect rect: CGRect) -> UIImage {
    let imageRef: CGImage = imageToCrop.cgImage!.cropping(to: rect)!
    let cropped: UIImage = UIImage(cgImage: imageRef)
    return cropped
}


let imageTop: UIImage = UIImage(named: "one.jpg")! // add validation

With the help of this bridging function, CGRectMake -> CGRect (from this answer by @rob mayoff):

 func CGRectMake(_ x: CGFloat, _ y: CGFloat, _ width: CGFloat, _ height: CGFloat) -> CGRect {
    return CGRect(x: x, y: y, width: width, height: height)
}

Usage:

if let image = UIImage(named: "one.jpg") {
    let croppedImage = cropImage(imageToCrop: image, toRect: CGRectMake(
        image.size.width/4,
        0,
        image.size.width/2,
        image.size.height)
    )
}

Output: (result screenshot not reproduced here)

- (UIImage *)squareImageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    double ratio;
    double delta;
    CGPoint offset;

    //make a new square size, that is the resized imaged width
    CGSize sz = CGSizeMake(newSize.width, newSize.width);

    //figure out if the picture is landscape or portrait, then
    //calculate scale factor and offset
    if (image.size.width > image.size.height) {
        ratio = newSize.width / image.size.width;
        delta = (ratio*image.size.width - ratio*image.size.height);
        offset = CGPointMake(delta/2, 0);
    } else {
        ratio = newSize.width / image.size.height;
        delta = (ratio*image.size.height - ratio*image.size.width);
        offset = CGPointMake(0, delta/2);
    }

    //make the final clipping rect based on the calculated values
    CGRect clipRect = CGRectMake(-offset.x, -offset.y,
                                 (ratio * image.size.width) + delta,
                                 (ratio * image.size.height) + delta);


    //start a new context, with scale factor 0.0 so retina displays get
    //high quality image
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
        UIGraphicsBeginImageContextWithOptions(sz, YES, 0.0);
    } else {
        UIGraphicsBeginImageContext(sz);
    }
    UIRectClip(clipRect);
    [image drawInRect:clipRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return newImage;
}
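
A short usage sketch for the method above, assuming it is defined on the current class; the image name and thumbnail size are placeholders:

// Produce a 100x100-point square thumbnail cropped from the center of the photo.
UIImage *photo = [UIImage imageNamed:@"photo.jpg"];
UIImage *square = [self squareImageWithImage:photo scaledToSize:CGSizeMake(100.0, 100.0)];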