My code works fine on non-Retina devices, but it produces blurry images on Retina devices.
Does anyone know a solution to my problem?
+ (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContext(view.bounds.size);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
Switch from using UIGraphicsBeginImageContext to UIGraphicsBeginImageContextWithOptions (as documented on this page). Pass 0.0 for scale (the third argument) and you will get a context whose scale factor equals that of the screen.
UIGraphicsBeginImageContext uses a fixed scale factor of 1.0, so you are actually getting exactly the same image on an iPhone 4 as on the other iPhones. I'd bet the iPhone 4 is either applying a filter when you implicitly scale it up, or your brain is simply picking up on it being less sharp than everything around it.
So, I guess:
#import <QuartzCore/QuartzCore.h>

+ (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
In Swift 4:
func image(with view: UIView) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.isOpaque, 0.0)
    defer { UIGraphicsEndImageContext() }
    if let context = UIGraphicsGetCurrentContext() {
        view.layer.render(in: context)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        return image
    }
    return nil
}
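For example, if the function above lives in a view controller, a minimal usage sketch could look like this (someView, the red background, and adding the result to self.view are illustrative assumptions, not part of the original answer):
// Hypothetical usage inside a view controller that defines image(with:).
let someView = UIView(frame: CGRect(x: 0, y: 0, width: 200, height: 200))
someView.backgroundColor = .red
if let snapshot = image(with: someView) {
    let snapshotImageView = UIImageView(image: snapshot)
    view.addSubview(snapshotImageView)
}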
Swift 2.0:
Using an extension method:
extension UIImage {
    class func renderUIViewToImage(viewToBeRendered: UIView?) -> UIImage
    {
        UIGraphicsBeginImageContextWithOptions((viewToBeRendered?.bounds.size)!, false, 0.0)
        // drawViewHierarchyInRect already renders the complete view hierarchy into
        // the current context, so a separate layer.renderInContext call is not needed.
        viewToBeRendered!.drawViewHierarchyInRect(viewToBeRendered!.bounds, afterScreenUpdates: true)
        let finalImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return finalImage
    }
}
Usage:
override func viewDidLoad() {
    super.viewDidLoad()

    // Sample view added to self.view
    let sampleView = UIView(frame: CGRectMake(100, 100, 200, 200))
    sampleView.backgroundColor = UIColor(patternImage: UIImage(named: "ic_120x120")!)
    self.view.addSubview(sampleView)

    // Image view that will display the rendered image
    let sampleImageView = UIImageView(frame: CGRectMake(100, 400, 200, 200))

    // sampleView is rendered into sampleImage
    let sampleImage = UIImage.renderUIViewToImage(sampleView)
    sampleImageView.image = sampleImage
    self.view.addSubview(sampleImageView)
}
A drop-in Swift 3.0 extension that supports the new iOS 10.0 API as well as the previous method.
Note:
Checks the iOS version.
Note the use of defer to simplify the context cleanup.
Also applies the view's opaqueness and current scale.
Nothing is force-unwrapped with !, which could cause a crash.
extension UIView
{
    public func renderToImage(afterScreenUpdates: Bool = false) -> UIImage?
    {
        if #available(iOS 10.0, *)
        {
            // iOS 10+: use UIGraphicsImageRenderer, which manages the context for us.
            let rendererFormat = UIGraphicsImageRendererFormat.default()
            rendererFormat.scale = self.layer.contentsScale
            rendererFormat.opaque = self.isOpaque
            let renderer = UIGraphicsImageRenderer(size: self.bounds.size, format: rendererFormat)

            return renderer.image
            { _ in
                self.drawHierarchy(in: self.bounds, afterScreenUpdates: afterScreenUpdates)
            }
        }
        else
        {
            // Earlier iOS versions: fall back to UIGraphicsBeginImageContextWithOptions.
            UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.isOpaque, self.layer.contentsScale)
            defer
            {
                UIGraphicsEndImageContext()
            }
            self.drawHierarchy(in: self.bounds, afterScreenUpdates: afterScreenUpdates)
            return UIGraphicsGetImageFromCurrentImageContext()
        }
    }
}
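A possible way to call it, as a minimal sketch (containerView and adding the snapshot to self.view are assumptions for illustration, not from the original answer):
// Hypothetical usage: snapshot a container view and display the result.
if let snapshot = containerView.renderToImage(afterScreenUpdates: true) {
    let snapshotImageView = UIImageView(image: snapshot)
    snapshotImageView.frame = containerView.frame
    view.addSubview(snapshotImageView)
}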