My code works fine on normal devices, but it produces blurry images on Retina devices.

Does anyone know a solution to my problem?

+ (UIImage *) imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContext(view.bounds.size);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];

    UIImage * img = UIGraphicsGetImageFromCurrentImageContext();

    UIGraphicsEndImageContext();

    return img;
}

Current answer

For Swift 5.1 you can use this extension:

extension UIView {

    /// Renders the view's layer hierarchy into a UIImage. UIGraphicsImageRenderer
    /// defaults to the main screen's scale, so the result stays sharp on Retina displays.
    func asImage() -> UIImage {
        let renderer = UIGraphicsImageRenderer(bounds: bounds)

        return renderer.image { layer.render(in: $0.cgContext) }
    }
}
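
A minimal usage sketch, assuming you have some view you want to snapshot (the someView below is a hypothetical example, not part of the original answer):

let someView = UIView(frame: CGRect(x: 0, y: 0, width: 200, height: 200))
someView.backgroundColor = .systemBlue

// asImage() renders through UIGraphicsImageRenderer, which picks up the
// screen's scale automatically, so the snapshot stays sharp on Retina displays.
let snapshot = someView.asImage()
let snapshotView = UIImageView(image: snapshot)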

Other answers

Add this method to a UIView category:

- (UIImage *)capture {
    // Use ...WithOptions with a scale of 0.0 so the context matches the
    // screen's scale factor; a plain UIGraphicsBeginImageContext renders
    // at 1.0 and looks blurry on Retina displays.
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.opaque, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.layer renderInContext:context];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}

Swift 2.0:

Using an extension method:

extension UIImage{

   class func renderUIViewToImage(viewToBeRendered:UIView?) -> UIImage
   {
       UIGraphicsBeginImageContextWithOptions((viewToBeRendered?.bounds.size)!, false, 0.0)
       viewToBeRendered!.drawViewHierarchyInRect(viewToBeRendered!.bounds, afterScreenUpdates: true)
       viewToBeRendered!.layer.renderInContext(UIGraphicsGetCurrentContext()!)

       let finalImage = UIGraphicsGetImageFromCurrentImageContext()
       UIGraphicsEndImageContext()

       return finalImage
   }

}

Usage:

override func viewDidLoad() {
    super.viewDidLoad()

    //Sample View To Self.view
    let sampleView = UIView(frame: CGRectMake(100,100,200,200))
    sampleView.backgroundColor =  UIColor(patternImage: UIImage(named: "ic_120x120")!)
    self.view.addSubview(sampleView)    

    //ImageView With Image
    let sampleImageView = UIImageView(frame: CGRectMake(100,400,200,200))

    //sampleView is rendered to sampleImage
    let sampleImage = UIImage.renderUIViewToImage(sampleView)

    sampleImageView.image = sampleImage
    self.view.addSubview(sampleImageView)

}

None of the Swift 3 answers worked for me, so I translated the most popular answer:

extension UIImage {
    class func imageWithView(view: UIView) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.isOpaque, 0.0)
        view.layer.render(in: UIGraphicsGetCurrentContext()!)
        let img: UIImage? = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return img!
    }
}
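
A usage sketch for this helper, with a hypothetical chartView standing in for whatever view you want to capture:

let chartView = UIView(frame: CGRect(x: 0, y: 0, width: 300, height: 200))
chartView.backgroundColor = .white

// The 0.0 scale used inside imageWithView(view:) means the bitmap is drawn
// at the screen's scale factor, so it is not blurry on Retina devices.
let snapshot = UIImage.imageWithView(view: chartView)
let snapshotView = UIImageView(image: snapshot)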

Switch from UIGraphicsBeginImageContext to UIGraphicsBeginImageContextWithOptions (as described on this page). Pass 0.0 as the scale (the third argument) and you get a context whose scale factor matches that of the screen.

UIGraphicsBeginImageContext uses a fixed scale factor of 1.0, so you actually get exactly the same image on an iPhone 4 as on other iPhones. I'd bet the iPhone 4 is either applying a filter when you implicitly scale the image up, or your brain simply notices that it is less sharp than everything around it.

So, I'd suggest:

#import <QuartzCore/QuartzCore.h>

+ (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];

    UIImage * img = UIGraphicsGetImageFromCurrentImageContext();

    UIGraphicsEndImageContext();

    return img;
}

In Swift 4:

func image(with view: UIView) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.isOpaque, 0.0)
    defer { UIGraphicsEndImageContext() }
    if let context = UIGraphicsGetCurrentContext() {
        view.layer.render(in: context)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        return image
    }
    return nil
}
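
A usage sketch for the function above, assuming a hypothetical cardView you want to capture; note that the function returns an optional:

let cardView = UIView(frame: CGRect(x: 0, y: 0, width: 120, height: 80))
cardView.backgroundColor = .orange

// image(with:) returns nil if no graphics context could be obtained,
// so unwrap the result before using it.
if let snapshot = image(with: cardView) {
    let snapshotView = UIImageView(image: snapshot)
    snapshotView.frame = cardView.bounds
}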

UIGraphicsImageRenderer is a relatively new API, introduced in iOS 10. You construct a UIGraphicsImageRenderer by specifying a point size. The image method takes a closure argument and returns the bitmap produced by executing that closure. In this case, the result is the original image scaled down to draw within the specified bounds.

https://nshipster.com/image-resizing/

So make sure the size you pass to UIGraphicsImageRenderer is in points, not pixels.

If your image comes out larger than expected, divide the size by the scale factor.
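
For example, if you start from pixel dimensions (say, read from a CGImage), a minimal sketch of converting them to points before creating the renderer could look like this; pixelSize is just an assumed input for illustration:

// Hypothetical pixel dimensions, e.g. taken from a CGImage.
let pixelSize = CGSize(width: 640, height: 640)

// Divide by the screen's scale factor to get points, so the renderer
// doesn't produce an image larger than expected.
let scale = UIScreen.main.scale
let pointSize = CGSize(width: pixelSize.width / scale,
                       height: pixelSize.height / scale)

let renderer = UIGraphicsImageRenderer(size: pointSize)
let image = renderer.image { context in
    // Draw whatever you need; the context is already set up
    // with the screen's scale factor.
    UIColor.red.setFill()
    context.fill(CGRect(origin: .zero, size: pointSize))
}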