Overview

How YYKit's animatedImage pipeline processes GIF images

First, a GIF belongs to YYKit's YYImage type. YYImage uses YYImageDecoder to extract the image's frame information, and that information is handed to YYAnimatedImageView through the YYAnimatedImage protocol that YYImage conforms to. This differs from YYFrameImage, which is itself built from an array of separate images.

Reading the YYAnimatedImageView source in YYKit

#define LOCK(...) dispatch_semaphore_wait(self->_lock, DISPATCH_TIME_FOREVER); \
__VA_ARGS__; \
dispatch_semaphore_signal(self->_lock);

#define LOCK_VIEW(...) dispatch_semaphore_wait(view->_lock, DISPATCH_TIME_FOREVER); \
__VA_ARGS__; \
dispatch_semaphore_signal(view->_lock);

When I first saw these two macro definitions and their usage, I didn't understand them. After some searching I learned that the backslash "\" joins consecutive lines into one logical line, which is exactly what macro definitions need, and that "__VA_ARGS__" stands for the variadic arguments of a macro, commonly used in function-like macros. So the macros above wrap whatever statements are passed to them inside a semaphore-protected critical section.

YYImage source

The class method + (NSArray *)preferredScales returns the preferred image scales for different screen pixel densities. It lives in the NSBundle+YYAdd file, a category on NSBundle. There is a coding-convention question here: a method built around [UIScreen mainScreen] ends up in an NSBundle category.

In [UIScreen mainScreen].scale, scale is the factor that maps the logical coordinate system to the device coordinate system: the logical system is measured in points, the device in pixels. Typically, on a Retina screen the scale is 2.0 or 3.0, meaning one point is rendered by four or nine pixels; on a standard-resolution display the factor is 1, and one point equals one pixel.

YYImage overrides the imageNamed: method, discarding imageNamed's caching behavior; instead it resolves the image file's path and reads the content with dataWithContentsOfFile.

YYImageCoder source

bit-depth: the color depth of an image, i.e. how many bits are used to define one pixel. The larger the bit depth, the more colors can be represented. Typically a channel's pixel values range from 0 to 255, which is a bit depth of 8. An RGB image has a bit depth of 24: 8 bits for R, 8 bits for G, 8 bits for B.

Contents of YYImageDecoder

+ (instancetype)decoderWithData:(NSData *)data scale:(CGFloat)scale {
    if (!data) return nil;
    YYImageDecoder *decoder = [[YYImageDecoder alloc] initWithScale:scale];
    [decoder updateData:data final:YES];
    if (decoder.frameCount == 0) return nil;
    return decoder;
}

- (void)_updateSource {
    switch (_type) {
        case YYImageTypeWebP: {
            [self _updateSourceWebP];
        } break;
        case YYImageTypePNG: {
            [self _updateSourceAPNG];
        } break;
        default: {
            [self _updateSourceImageIO];
        } break;
    }
}

Since this article focuses on the GIF format, the branch executed here is [self _updateSourceImageIO];. Comrades, we have finally arrived. (A storyteller's phrase, borrowed for the occasion.)

- (void)_updateSourceImageIO {
    _width = 0;
    _height = 0;
    _orientation = UIImageOrientationUp;
    _loopCount = 0;
    dispatch_semaphore_wait(_framesLock, DISPATCH_TIME_FOREVER);
    _frames = nil;
    dispatch_semaphore_signal(_framesLock);
    
    if (!_source) {
        if (_finalized) {
            _source = CGImageSourceCreateWithData((__bridge CFDataRef)_data, NULL);
        } else {
            _source = CGImageSourceCreateIncremental(NULL);
            if (_source) CGImageSourceUpdateData(_source, (__bridge CFDataRef)_data, false);
        }
    } else {
        CGImageSourceUpdateData(_source, (__bridge CFDataRef)_data, _finalized);
    }
    if (!_source) return;
    
    _frameCount = CGImageSourceGetCount(_source);
    if (_frameCount == 0) return;
    
    if (!_finalized) { // ignore multi-frame before finalized
        _frameCount = 1;
    } else {
        if (_type == YYImageTypePNG) { // use custom apng decoder and ignore multi-frame
            _frameCount = 1;
        }
        if (_type == YYImageTypeGIF) { // get gif loop count
            CFDictionaryRef properties = CGImageSourceCopyProperties(_source, NULL);
            if (properties) {
                CFDictionaryRef gif = CFDictionaryGetValue(properties, kCGImagePropertyGIFDictionary);
                if (gif) {
                    CFTypeRef loop = CFDictionaryGetValue(gif, kCGImagePropertyGIFLoopCount);
                    if (loop) CFNumberGetValue(loop, kCFNumberNSIntegerType, &_loopCount);
                }
                CFRelease(properties);
            }
        }
    }
    
    /* ICO, GIF, APNG may contain multi-frame. */
    NSMutableArray *frames = [NSMutableArray new];
    for (NSUInteger i = 0; i < _frameCount; i++) {
        _YYImageDecoderFrame *frame = [_YYImageDecoderFrame new];
        frame.index = i;
        frame.blendFromIndex = i;
        frame.hasAlpha = YES;
        frame.isFullSize = YES;
        [frames addObject:frame];
        
        CFDictionaryRef properties = CGImageSourceCopyPropertiesAtIndex(_source, i, NULL);
        if (properties) {
            NSTimeInterval duration = 0;
            NSInteger orientationValue = 0, width = 0, height = 0;
            CFTypeRef value = NULL;
            value = CFDictionaryGetValue(properties, kCGImagePropertyPixelWidth);
            if (value) CFNumberGetValue(value, kCFNumberNSIntegerType, &width);
            value = CFDictionaryGetValue(properties, kCGImagePropertyPixelHeight);
            if (value) CFNumberGetValue(value, kCFNumberNSIntegerType, &height);
            if (_type == YYImageTypeGIF) {
                CFDictionaryRef gif = CFDictionaryGetValue(properties, kCGImagePropertyGIFDictionary);
                if (gif) {
                    // Use the unclamped frame delay if it exists.
                    value = CFDictionaryGetValue(gif, kCGImagePropertyGIFUnclampedDelayTime);
                    if (!value) {
                        // Fall back to the clamped frame delay if the unclamped frame delay does not exist.
                        value = CFDictionaryGetValue(gif, kCGImagePropertyGIFDelayTime);
                    }
                    if (value) CFNumberGetValue(value, kCFNumberDoubleType, &duration);
                }
            }
            
            frame.width = width;
            frame.height = height;
            frame.duration = duration;
            
            if (i == 0 && _width + _height == 0) { // init first frame
                _width = width;
                _height = height;
                value = CFDictionaryGetValue(properties, kCGImagePropertyOrientation);
                if (value) {
                    CFNumberGetValue(value, kCFNumberNSIntegerType, &orientationValue);
                    _orientation = YYUIImageOrientationFromEXIFValue(orientationValue);
                }
            }
            CFRelease(properties);
        }
    }
    dispatch_semaphore_wait(_framesLock, DISPATCH_TIME_FOREVER);
    _frames = frames;
    dispatch_semaphore_signal(_framesLock);
}

The method above does two main things. Before entering the for loop, it first creates a CGImageSourceRef object from the data:

if (!_source) {
    if (_finalized) {
        _source = CGImageSourceCreateWithData((__bridge CFDataRef)_data, NULL);
    } else {
        _source = CGImageSourceCreateIncremental(NULL);
        if (_source) CGImageSourceUpdateData(_source, (__bridge CFDataRef)_data, false);
    }
} else {
    CGImageSourceUpdateData(_source, (__bridge CFDataRef)_data, _finalized);
}

Depending on the value of the _finalized flag, it either creates a new CGImageSourceRef object or feeds new data into the existing one.

IMAGEIO_EXTERN CGImageSourceRef __nullable CGImageSourceCreateWithData(CFDataRef __nonnull data, CFDictionaryRef __nullable options) IMAGEIO_AVAILABLE_STARTING(__MAC_10_4, __IPHONE_4_0);

IMAGEIO_EXTERN CGImageSourceRef __nonnull CGImageSourceCreateIncremental(CFDictionaryRef __nullable options) IMAGEIO_AVAILABLE_STARTING(__MAC_10_4, __IPHONE_4_0);

IMAGEIO_EXTERN void CGImageSourceUpdateData(CGImageSourceRef __nonnull isrc, CFDataRef __nonnull data, bool final) IMAGEIO_AVAILABLE_STARTING(__MAC_10_4, __IPHONE_4_0);

All of the functions above come from the ImageIO framework.

*Note that a progressive (incremental) _source is created differently from a non-progressive one: the incremental source must be created empty first and then fed data.

_frameCount = CGImageSourceGetCount(_source);
if (_frameCount == 0) return;
if (!_finalized) { // ignore multi-frame before finalized
    _frameCount = 1;
} else {
    if (_type == YYImageTypePNG) { // use custom apng decoder and ignore multi-frame
        _frameCount = 1;
    }
    if (_type == YYImageTypeGIF) { // get gif loop count
        CFDictionaryRef properties = CGImageSourceCopyProperties(_source, NULL);
        if (properties) {
            CFDictionaryRef gif = CFDictionaryGetValue(properties, kCGImagePropertyGIFDictionary);
            if (gif) {
                CFTypeRef loop = CFDictionaryGetValue(gif, kCGImagePropertyGIFLoopCount);
                if (loop) CFNumberGetValue(loop, kCFNumberNSIntegerType, &_loopCount);
            }
            CFRelease(properties);
        }
    }
}

Next, the number of images in _source is obtained.

  IMAGEIO_EXTERN size_t CGImageSourceGetCount(CGImageSourceRef __nonnull isrc)  IMAGEIO_AVAILABLE_STARTING(__MAC_10_4, __IPHONE_4_0);

The ImageIO function above returns the number of images in the image source, not counting the thumbnail, and it does not extract layers from PSD files.
The loop count is read with CFTypeRef loop = CFDictionaryGetValue(gif, kCGImagePropertyGIFLoopCount),

and stored into _loopCount with if (loop) CFNumberGetValue(loop, kCFNumberNSIntegerType, &_loopCount).

Finally, the for loop reads the property data of each frame (image) and initializes the first-frame data. The key code:

value = CFDictionaryGetValue(properties, kCGImagePropertyPixelWidth);
value = CFDictionaryGetValue(properties, kCGImagePropertyPixelHeight);
value = CFDictionaryGetValue(gif, kCGImagePropertyGIFUnclampedDelayTime);
if (!value) {
    // Fall back to the clamped frame delay if the unclamped frame delay does not exist.
    value = CFDictionaryGetValue(gif, kCGImagePropertyGIFDelayTime);
}

Next comes the method that produces the image data for a given frame:

- (CGImageRef)_newUnblendedImageAtIndex:(NSUInteger)index
                         extendToCanvas:(BOOL)extendToCanvas
                                decoded:(BOOL *)decoded CF_RETURNS_RETAINED {
    if (!_finalized && index > 0) return NULL;
    if (_frames.count <= index) return NULL;
    _YYImageDecoderFrame *frame = _frames[index];
    
    if (_source) {
        CGImageRef imageRef = CGImageSourceCreateImageAtIndex(_source, index, (CFDictionaryRef)@{(id)kCGImageSourceShouldCache:@(YES)});
        if (imageRef && extendToCanvas) {
            size_t width = CGImageGetWidth(imageRef);
            size_t height = CGImageGetHeight(imageRef);
            if (width == _width && height == _height) {
                CGImageRef imageRefExtended = YYCGImageCreateDecodedCopy(imageRef, YES);
                if (imageRefExtended) {
                    CFRelease(imageRef);
                    imageRef = imageRefExtended;
                    if (decoded) *decoded = YES;
                }
            } else {
                CGContextRef context = CGBitmapContextCreate(NULL, _width, _height, 8, 0, YYCGColorSpaceGetDeviceRGB(), kCGBitmapByteOrder32Host | kCGImageAlphaPremultipliedFirst);
                if (context) {
                    CGContextDrawImage(context, CGRectMake(0, _height - height, width, height), imageRef);
                    CGImageRef imageRefExtended = CGBitmapContextCreateImage(context);
                    CFRelease(context);
                    if (imageRefExtended) {
                        CFRelease(imageRef);
                        imageRef = imageRefExtended;
                        if (decoded) *decoded = YES;
                    }
                }
            }
        }
        return imageRef;
    }
    
    if (_apngSource) {
        uint32_t size = 0;
        uint8_t *bytes = yy_png_copy_frame_data_at_index(_data.bytes, _apngSource, (uint32_t)index, &size);
        if (!bytes) return NULL;
        CGDataProviderRef provider = CGDataProviderCreateWithData(bytes, bytes, size, YYCGDataProviderReleaseDataCallback);
        if (!provider) {
            free(bytes);
            return NULL;
        }
        bytes = NULL; // hold by provider
        CGImageSourceRef source = CGImageSourceCreateWithDataProvider(provider, NULL);
        if (!source) {
            CFRelease(provider);
            return NULL;
        }
        CFRelease(provider);
        if (CGImageSourceGetCount(source) < 1) {
            CFRelease(source);
            return NULL;
        }
        CGImageRef imageRef = CGImageSourceCreateImageAtIndex(source, 0, (CFDictionaryRef)@{(id)kCGImageSourceShouldCache:@(YES)});
        CFRelease(source);
        if (!imageRef) return NULL;
        if (extendToCanvas) {
            CGContextRef context = CGBitmapContextCreate(NULL, _width, _height, 8, 0, YYCGColorSpaceGetDeviceRGB(), kCGBitmapByteOrder32Host | kCGImageAlphaPremultipliedFirst); //bgrA
        if (context) {
                CGContextDrawImage(context, CGRectMake(frame.offsetX, frame.offsetY, frame.width, frame.height), imageRef);
                CFRelease(imageRef);
                imageRef = CGBitmapContextCreateImage(context);
                CFRelease(context);
                if (decoded) *decoded = YES;
            }
        }
        return imageRef;
    }
    
#if YYIMAGE_WEBP_ENABLED
    if (_webpSource) {
        WebPIterator iter;
        if (!WebPDemuxGetFrame(_webpSource, (int)(index + 1), &iter)) return NULL; // demux webp frame data
        // frame numbers are one-based in webp -----------^
        int frameWidth = iter.width;
        int frameHeight = iter.height;
        if (frameWidth < 1 || frameHeight < 1) return NULL;
        
        int width = extendToCanvas ? (int)_width : frameWidth;
        int height = extendToCanvas ? (int)_height : frameHeight;
        if (width > _width || height > _height) return NULL;
        
        const uint8_t *payload = iter.fragment.bytes;
        size_t payloadSize = iter.fragment.size;
        
        WebPDecoderConfig config;
        if (!WebPInitDecoderConfig(&config)) {
            WebPDemuxReleaseIterator(&iter);
            return NULL;
        }
        if (WebPGetFeatures(payload, payloadSize, &config.input) != VP8_STATUS_OK) {
            WebPDemuxReleaseIterator(&iter);
            return NULL;
        }
        
        size_t bitsPerComponent = 8;
        size_t bitsPerPixel = 32;
        size_t bytesPerRow = YYImageByteAlign(bitsPerPixel / 8 * width, 32);
        size_t length = bytesPerRow * height;
        CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Host | kCGImageAlphaPremultipliedFirst; //bgrA
        
        void *pixels = calloc(1, length);
        if (!pixels) {
            WebPDemuxReleaseIterator(&iter);
            return NULL;
        }
        
        config.output.colorspace = MODE_bgrA;
        config.output.is_external_memory = 1;
        config.output.u.RGBA.rgba = pixels;
        config.output.u.RGBA.stride = (int)bytesPerRow;
        config.output.u.RGBA.size = length;
        VP8StatusCode result = WebPDecode(payload, payloadSize, &config); // decode
        if ((result != VP8_STATUS_OK) && (result != VP8_STATUS_NOT_ENOUGH_DATA)) {
            WebPDemuxReleaseIterator(&iter);
            free(pixels);
            return NULL;
        }
        WebPDemuxReleaseIterator(&iter);
        
        if (extendToCanvas && (iter.x_offset != 0 || iter.y_offset != 0)) {
            void *tmp = calloc(1, length);
            if (tmp) {
                vImage_Buffer src = {pixels, height, width, bytesPerRow};
                vImage_Buffer dest = {tmp, height, width, bytesPerRow};
                vImage_CGAffineTransform transform = {1, 0, 0, 1, iter.x_offset, -iter.y_offset};
                uint8_t backColor[4] = {0};
                vImage_Error error = vImageAffineWarpCG_ARGB8888(&src, &dest, NULL, &transform, backColor, kvImageBackgroundColorFill);
                if (error == kvImageNoError) {
                    memcpy(pixels, tmp, length);
                }
                free(tmp);
            }
        }
        
        CGDataProviderRef provider = CGDataProviderCreateWithData(pixels, pixels, length, YYCGDataProviderReleaseDataCallback);
        if (!provider) {
            free(pixels);
            return NULL;
        }
        pixels = NULL; // hold by provider
        
        CGImageRef image = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, YYCGColorSpaceGetDeviceRGB(), bitmapInfo, provider, NULL, false, kCGRenderingIntentDefault);
        CFRelease(provider);
        if (decoded) *decoded = YES;
        return image;
    }
#endif
    return NULL;
}

Studying the code above shows that a specific frame can be extracted from an image data source, with options, using the following function:

IMAGEIO_EXTERN CGImageRef __nullable CGImageSourceCreateImageAtIndex(CGImageSourceRef __nonnull isrc, size_t index, CFDictionaryRef __nullable options)  IMAGEIO_AVAILABLE_STARTING(__MAC_10_4, __IPHONE_4_0);

In the source, the options dictionary asks ImageIO to cache the decoded frame: CGImageRef imageRef = CGImageSourceCreateImageAtIndex(_source, index, (CFDictionaryRef)@{(id)kCGImageSourceShouldCache:@(YES)});

Then the frame's width and height are read:

size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);

Creating a new (decoded) image from the data:

CGImageRef YYCGImageCreateDecodedCopy(CGImageRef imageRef, BOOL decodeForDisplay) {
    if (!imageRef) return NULL;
    size_t width = CGImageGetWidth(imageRef);
    size_t height = CGImageGetHeight(imageRef);
    if (width == 0 || height == 0) return NULL;
    
    if (decodeForDisplay) { // decode with redraw (may lose some precision)
        CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef) & kCGBitmapAlphaInfoMask;
        BOOL hasAlpha = NO;
        if (alphaInfo == kCGImageAlphaPremultipliedLast ||
            alphaInfo == kCGImageAlphaPremultipliedFirst ||
            alphaInfo == kCGImageAlphaLast ||
            alphaInfo == kCGImageAlphaFirst) {
            hasAlpha = YES;
        }
        // BGRA8888 (premultiplied) or BGRX8888
        // same as UIGraphicsBeginImageContext() and -[UIView drawRect:]
        CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Host;
        bitmapInfo |= hasAlpha ? kCGImageAlphaPremultipliedFirst : kCGImageAlphaNoneSkipFirst;
        CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0, YYCGColorSpaceGetDeviceRGB(), bitmapInfo);
        if (!context) return NULL;
        CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef); // decode
        CGImageRef newImage = CGBitmapContextCreateImage(context);
        CFRelease(context);
        return newImage;
    } else {
        CGColorSpaceRef space = CGImageGetColorSpace(imageRef);
        size_t bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
        size_t bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
        size_t bytesPerRow = CGImageGetBytesPerRow(imageRef);
        CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
        if (bytesPerRow == 0 || width == 0 || height == 0) return NULL;
        
        CGDataProviderRef dataProvider = CGImageGetDataProvider(imageRef);
        if (!dataProvider) return NULL;
        CFDataRef data = CGDataProviderCopyData(dataProvider); // decode
        if (!data) return NULL;
        
        CGDataProviderRef newProvider = CGDataProviderCreateWithCFData(data);
        CFRelease(data);
        if (!newProvider) return NULL;
        
        CGImageRef newImage = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, space, bitmapInfo, newProvider, NULL, false, kCGRenderingIntentDefault);
        CFRelease(newProvider);
        return newImage;
    }
}

In the _frameAtIndex:decodeForDisplay: method, the retrieved image data is used to create a new image:

CGImageRef imageRef = [self _newUnblendedImageAtIndex:index extendToCanvas:extendToCanvas decoded:&decoded];
if (!imageRef) return nil;
if (decodeForDisplay && !decoded) {
    CGImageRef imageRefDecoded = YYCGImageCreateDecodedCopy(imageRef, YES);
    if (imageRefDecoded) {
        CFRelease(imageRef);
        imageRef = imageRefDecoded;
        decoded = YES;
    }
}
UIImage *image = [UIImage imageWithCGImage:imageRef scale:_scale orientation:_orientation];

Once the decoder has gathered this information, YYImage can compute each frame's memory size and, from that, the memory footprint of the whole animated image (if it is one):

if (decoder.frameCount > 1) {
    _decoder = decoder;
    _bytesPerFrame = CGImageGetBytesPerRow(image.CGImage) * CGImageGetHeight(image.CGImage);
    _animatedImageMemorySize = _bytesPerFrame * decoder.frameCount;
}

And through the YYAnimatedImage protocol, YYImage hands this information to the objects that need it.

#pragma mark - protocol YYAnimatedImage

- (NSUInteger)animatedImageFrameCount {
    return _decoder.frameCount;
}

- (NSUInteger)animatedImageLoopCount {
    return _decoder.loopCount;
}

- (NSUInteger)animatedImageBytesPerFrame {
    return _bytesPerFrame;
}

- (UIImage *)animatedImageFrameAtIndex:(NSUInteger)index {
    if (index >= _decoder.frameCount) return nil;
    dispatch_semaphore_wait(_preloadedLock, DISPATCH_TIME_FOREVER);
    UIImage *image = _preloadedFrames[index];
    dispatch_semaphore_signal(_preloadedLock);
    if (image) return image == (id)[NSNull null] ? nil : image;
    return [_decoder frameAtIndex:index decodeForDisplay:YES].image;
}

- (NSTimeInterval)animatedImageDurationAtIndex:(NSUInteger)index {
    NSTimeInterval duration = [_decoder frameDurationAtIndex:index];
    /*
     http://opensource.apple.com/source/WebCore/WebCore-7600.1.25/platform/graphics/cg/ImageSourceCG.cpp
     Many annoying ads specify a 0 duration to make an image flash as quickly as possible.
     We follow Safari and Firefox's behavior and use a duration of 100 ms for any frames
     that specify a duration of <= 10 ms.
     See <rdar://problem/7689300> and <http://webkit.org/b/36082> for more information.
     See also: http://nullsleep.tumblr.com/post/16524517190/animated-gif-minimum-frame-delay-browser.
     */
    if (duration < 0.011f) return 0.100f;
    return duration;
}

Shows how little I'd seen: I didn't know delegation could be used this way. The view only needs curAnimatedImage to conform to the protocol.

_totalLoop = _curAnimatedImage.animatedImageLoopCount;
_totalFrameCount = _curAnimatedImage.animatedImageFrameCount;

PS: A thousand readers, a thousand Hamlets. Excellent source code can sharpen you in every way. In this study of how YYAnimatedImageView displays GIF files alone, I produced a dozen or so summary notes of various sizes; for fear of embarrassing myself I won't post them all. The deeper I read, the more I admire ibireme, both for his coding style and for his mastery of C++.
