Error message

[-] Error: while downloading file([-] Error: while reading from socket: (timed out)).

Cause

Testing shows that recv_size can be larger than buffer_size. When that happens, the file has actually finished downloading, but remain_bytes is still greater than 0, so the loop keeps waiting for more data and eventually times out.
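The mismatch is pure arithmetic, so it can be reproduced without a network. In this illustrative sketch (the numbers are made up: 3000 bytes delivered in two 1500-byte reads), subtracting `buffer_size` leaves a positive remainder after all data has arrived, while subtracting the actual `recv_size` drains it to zero:

```python
# Simulate the download loop's byte accounting.
# Scenario (illustrative): server delivers 3000 bytes in two 1500-byte
# reads, i.e. recv_size > buffer_size, as observed in testing.
file_size = 3000
buffer_size = 1024
recv_sizes = [1500, 1500]          # bytes actually returned per read

remain_buggy = file_size
remain_fixed = file_size
for recv_size in recv_sizes:
    remain_buggy -= buffer_size    # buggy: remain_bytes -= buffer_size
    remain_fixed -= recv_size      # fixed: remain_bytes -= recv_size

print(remain_buggy)  # 952 -> loop still waits for data and times out
print(remain_fixed)  # 0   -> loop exits cleanly
```

With the buggy accounting the loop believes 952 bytes are still outstanding even though the full file has arrived, so the next read blocks until the socket timeout fires.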

Source code behind the error

Debugging shows that the download failure originates in the following code:

```python
def tcp_recv_file(conn, local_filename, file_size, buffer_size=1024):
    '''Receive file from server, fragmented it while receiving and write to disk.
    arguments:
    @conn: connection
    @local_filename: string
    @file_size: int, remote file size
    @buffer_size: int, receive buffer size
    @Return int: file size if success else raise ConnectionError.
    '''
    total_file_size = 0
    flush_size = 0
    remain_bytes = file_size
    with open(local_filename, 'wb+') as f:
        while remain_bytes > 0:
            try:
                if remain_bytes >= buffer_size:
                    file_buffer, recv_size = tcp_recv_response(conn, buffer_size, buffer_size)
                else:
                    file_buffer, recv_size = tcp_recv_response(conn, remain_bytes, buffer_size)
                f.write(file_buffer)
                remain_bytes -= buffer_size
                total_file_size += recv_size
                flush_size += recv_size
                if flush_size >= 4096:
                    f.flush()
                    flush_size = 0
            except ConnectionError as e:
                raise ConnectionError('[-] Error: while downloading file(%s).' % e.args)
            except IOError as e:
                raise DataError('[-] Error: while writting local file(%s).' % e.args)
    return total_file_size
```

The faulty line is `remain_bytes -= buffer_size`: the loop should subtract `recv_size`, the number of bytes actually received, not the buffer size it asked for.

The fixed code is on GitHub: https://github.com/Daphnis-z/py3fdfs-pypi.org

Fix

Since only the fdfs_client/storage_client.py file inside the package was changed, there are two ways to apply the fix:

1. Download the source from GitHub and install it directly, or

2. Replace the storage_client.py file in the installed package.

The file lives at: [Python installation path]/site-packages/fdfs_client/
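If you are unsure where site-packages is for the interpreter you actually run, the interpreter can tell you. This small sketch uses the standard-library `sysconfig` module (the exact path varies by OS and virtualenv; on Debian-based systems it may be dist-packages instead):

```python
# Print the directory that holds installed third-party packages,
# i.e. the parent of the fdfs_client/ folder you need to patch.
import sysconfig

site_dir = sysconfig.get_paths()['purelib']
print(site_dir)  # e.g. .../lib/python3.x/site-packages
```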

The replacement storage_client.py

```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# filename: storage_client.py
import os
import sys  # needed by tcp_send_file_ex (sys.platform)
import stat
import errno
import struct
import socket
import datetime
import platform

from fdfs_client.fdfs_protol import *
from fdfs_client.connection import *
# from test_fdfs.sendfile import *
from fdfs_client.exceptions import (
    FDFSError,
    ConnectionError,
    ResponseError,
    InvaildResponse,
    DataError
)
from fdfs_client.utils import *

__os_sep__ = "/" if platform.system() == 'Windows' else os.sep


def tcp_send_file(conn, filename, buffer_size=1024):
    '''Send file to server, and split into multiple pkgs while sending.
    arguments:
    @conn: connection
    @filename: string
    @buffer_size: int, send buffer size
    @Return int: file size if success else raise ConnectionError.
    '''
    file_size = 0
    with open(filename, 'rb') as f:
        while 1:
            try:
                send_buffer = f.read(buffer_size)
                send_size = len(send_buffer)
                if send_size == 0:
                    break
                tcp_send_data(conn, send_buffer)
                file_size += send_size
            except ConnectionError as e:
                raise ConnectionError('[-] Error while uploading file(%s).' % e.args)
            except IOError as e:
                raise DataError('[-] Error while reading local file(%s).' % e.args)
    return file_size


def tcp_send_file_ex(conn, filename, buffer_size=4096):
    '''Send file to server. Using linux system call 'sendfile'.
    arguments:
    @conn: connection
    @filename: string
    @return long, sended size
    '''
    if 'linux' not in sys.platform.lower():
        raise DataError('[-] Error: \'sendfile\' system call only available on linux.')
    nbytes = 0
    offset = 0
    sock_fd = conn.get_sock().fileno()
    with open(filename, 'rb') as f:
        in_fd = f.fileno()
        while 1:
            try:
                pass
                # sent = sendfile(sock_fd, in_fd, offset, buffer_size)
                # if 0 == sent:
                #     break
                # nbytes += sent
                # offset += sent
            except OSError as e:
                if e.errno == errno.EAGAIN:
                    continue
                raise
    return nbytes


def tcp_recv_file(conn, local_filename, file_size, buffer_size=1024):
    '''Receive file from server, fragmented it while receiving and write to disk.
    arguments:
    @conn: connection
    @local_filename: string
    @file_size: int, remote file size
    @buffer_size: int, receive buffer size
    @Return int: file size if success else raise ConnectionError.
    '''
    total_file_size = 0
    flush_size = 0
    remain_bytes = file_size
    with open(local_filename, 'wb+') as f:
        while remain_bytes > 0:
            try:
                if remain_bytes >= buffer_size:
                    file_buffer, recv_size = tcp_recv_response(conn, buffer_size, buffer_size)
                else:
                    file_buffer, recv_size = tcp_recv_response(conn, remain_bytes, buffer_size)
                f.write(file_buffer)
                remain_bytes -= recv_size  # the fix: subtract the bytes actually received
                total_file_size += recv_size
                flush_size += recv_size
                if flush_size >= 4096:
                    f.flush()
                    flush_size = 0
            except ConnectionError as e:
                raise ConnectionError('[-] Error: while downloading file(%s).' % e.args)
            except IOError as e:
                raise DataError('[-] Error: while writting local file(%s).' % e.args)
    return total_file_size


class Storage_client(object):
    '''
    The Class Storage_client for storage server.
    Note: argument host_tuple of storage server ip address, that should be a single element.
    '''

    def __init__(self, *kwargs):
        conn_kwargs = {
            'name': 'Storage Pool',
            'host_tuple': (kwargs[0],),
            'port': kwargs[1],
            'timeout': kwargs[2]
        }
        self.pool = ConnectionPool(**conn_kwargs)
        return None

    def __del__(self):
        try:
            self.pool.destroy()
            self.pool = None
        except:
            pass

    def update_pool(self, old_store_serv, new_store_serv, timeout=30):
        '''
        Update connection pool of storage client.
        We need update connection pool of storage client, while storage server is changed.
        but if server not changed, we do nothing.
        '''
        if old_store_serv.ip_addr == new_store_serv.ip_addr:
            return None
        self.pool.destroy()
        conn_kwargs = {
            'name': 'Storage_pool',
            'host_tuple': (new_store_serv.ip_addr,),
            'port': new_store_serv.port,
            'timeout': timeout
        }
        self.pool = ConnectionPool(**conn_kwargs)
        return True

    def _storage_do_upload_file(self, tracker_client, store_serv, file_buffer, file_size=None, upload_type=None,
                                meta_dict=None, cmd=None, master_filename=None, prefix_name=None, file_ext_name=None):
        '''
        core of upload file.
        arguments:
        @tracker_client: Tracker_client, it is useful connect to tracker server
        @store_serv: Storage_server, it is return from query tracker server
        @file_buffer: string, file name or file buffer for send
        @file_size: int
        @upload_type: int, optional: FDFS_UPLOAD_BY_FILE, FDFS_UPLOAD_BY_FILENAME, FDFS_UPLOAD_BY_BUFFER
        @meta_dic: dictionary, store metadata in it
        @cmd: int, reference fdfs protol
        @master_filename: string, useful upload slave file
        @prefix_name: string
        @file_ext_name: string
        @Return dictionary
            {
                'Group name'      : group_name,
                'Remote file_id'  : remote_file_id,
                'Status'          : status,
                'Local file name' : local_filename,
                'Uploaded size'   : upload_size,
                'Storage IP'      : storage_ip
            }
        '''
        store_conn = self.pool.get_connection()
        th = Tracker_header()
        master_filename_len = len(master_filename) if master_filename else 0
        prefix_name_len = len(prefix_name) if prefix_name else 0
        upload_slave = len(store_serv.group_name) and master_filename_len
        file_ext_name = str(file_ext_name) if file_ext_name else ''
        # non_slave_fmt |-store_path_index(1)-file_size(8)-file_ext_name(6)-|
        non_slave_fmt = '!B Q %ds' % FDFS_FILE_EXT_NAME_MAX_LEN
        # slave_fmt |-master_len(8)-file_size(8)-prefix_name(16)-file_ext_name(6)
        #           -master_name(master_filename_len)-|
        slave_fmt = '!Q Q %ds %ds %ds' % (FDFS_FILE_PREFIX_MAX_LEN, FDFS_FILE_EXT_NAME_MAX_LEN, master_filename_len)
        th.pkg_len = struct.calcsize(slave_fmt) if upload_slave else struct.calcsize(non_slave_fmt)
        th.pkg_len += file_size
        th.cmd = cmd
        th.send_header(store_conn)
        if upload_slave:
            send_buffer = struct.pack(slave_fmt, master_filename_len, file_size, prefix_name, file_ext_name,
                                      master_filename)
        else:
            send_buffer = struct.pack(non_slave_fmt, store_serv.store_path_index, file_size, file_ext_name.encode())
        try:
            tcp_send_data(store_conn, send_buffer)
            if upload_type == FDFS_UPLOAD_BY_FILENAME:
                send_file_size = tcp_send_file(store_conn, file_buffer)
            elif upload_type == FDFS_UPLOAD_BY_BUFFER:
                tcp_send_data(store_conn, file_buffer)
            elif upload_type == FDFS_UPLOAD_BY_FILE:
                send_file_size = tcp_send_file_ex(store_conn, file_buffer)
            th.recv_header(store_conn)
            if th.status != 0:
                raise DataError('[-] Error: %d, %s' % (th.status, os.strerror(th.status)))
            recv_buffer, recv_size = tcp_recv_response(store_conn, th.pkg_len)
            if recv_size <= FDFS_GROUP_NAME_MAX_LEN:
                errmsg = '[-] Error: Storage response length is not match, '
                errmsg += 'expect: %d, actual: %d' % (th.pkg_len, recv_size)
                raise ResponseError(errmsg)
            # recv_fmt: |-group_name(16)-remote_file_name(recv_size - 16)-|
            recv_fmt = '!%ds %ds' % (FDFS_GROUP_NAME_MAX_LEN, th.pkg_len - FDFS_GROUP_NAME_MAX_LEN)
            (group_name, remote_name) = struct.unpack(recv_fmt, recv_buffer)
            remote_filename = remote_name.strip(b'\x00')
            if meta_dict and len(meta_dict) > 0:
                status = self.storage_set_metadata(tracker_client, store_serv, remote_filename, meta_dict)
                if status != 0:
                    # rollback
                    self.storage_delete_file(tracker_client, store_serv, remote_filename)
                    raise DataError('[-] Error: %d, %s' % (status, os.strerror(status)))
        except:
            raise
        finally:
            self.pool.release(store_conn)
        ret_dic = {
            'Group name': group_name.strip(b'\x00'),
            'Remote file_id': group_name.strip(b'\x00') + __os_sep__.encode() + remote_filename,
            'Status': 'Upload successed.',
            'Local file name': file_buffer if (upload_type == FDFS_UPLOAD_BY_FILENAME
                                               or upload_type == FDFS_UPLOAD_BY_FILE) else '',
            'Uploaded size': appromix(send_file_size) if (upload_type == FDFS_UPLOAD_BY_FILENAME
                                                          or upload_type == FDFS_UPLOAD_BY_FILE)
                             else appromix(len(file_buffer)),
            'Storage IP': store_serv.ip_addr
        }
        return ret_dic

    def storage_upload_by_filename(self, tracker_client, store_serv, filename, meta_dict=None):
        file_size = os.stat(filename).st_size
        file_ext_name = get_file_ext_name(filename)
        return self._storage_do_upload_file(tracker_client, store_serv, filename, file_size, FDFS_UPLOAD_BY_FILENAME,
                                            meta_dict, STORAGE_PROTO_CMD_UPLOAD_FILE, None, None, file_ext_name)

    def storage_upload_by_file(self, tracker_client, store_serv, filename, meta_dict=None):
        file_size = os.stat(filename).st_size
        file_ext_name = get_file_ext_name(filename)
        return self._storage_do_upload_file(tracker_client, store_serv, filename, file_size, FDFS_UPLOAD_BY_FILE,
                                            meta_dict, STORAGE_PROTO_CMD_UPLOAD_FILE, None, None, file_ext_name)

    def storage_upload_by_buffer(self, tracker_client, store_serv, file_buffer, file_ext_name=None, meta_dict=None):
        buffer_size = len(file_buffer)
        return self._storage_do_upload_file(tracker_client, store_serv, file_buffer, buffer_size,
                                            FDFS_UPLOAD_BY_BUFFER, meta_dict, STORAGE_PROTO_CMD_UPLOAD_FILE,
                                            None, None, file_ext_name)

    def storage_upload_slave_by_filename(self, tracker_client, store_serv, filename, prefix_name, remote_filename,
                                         meta_dict=None):
        file_size = os.stat(filename).st_size
        file_ext_name = get_file_ext_name(filename)
        return self._storage_do_upload_file(tracker_client, store_serv, filename, file_size, FDFS_UPLOAD_BY_FILENAME,
                                            meta_dict, STORAGE_PROTO_CMD_UPLOAD_SLAVE_FILE, remote_filename,
                                            prefix_name, file_ext_name)

    def storage_upload_slave_by_file(self, tracker_client, store_serv, filename, prefix_name, remote_filename,
                                     meta_dict=None):
        file_size = os.stat(filename).st_size
        file_ext_name = get_file_ext_name(filename)
        return self._storage_do_upload_file(tracker_client, store_serv, filename, file_size, FDFS_UPLOAD_BY_FILE,
                                            meta_dict, STORAGE_PROTO_CMD_UPLOAD_SLAVE_FILE, remote_filename,
                                            prefix_name, file_ext_name)

    def storage_upload_slave_by_buffer(self, tracker_client, store_serv, filebuffer, remote_filename, meta_dict,
                                       file_ext_name):
        file_size = len(filebuffer)
        return self._storage_do_upload_file(tracker_client, store_serv, filebuffer, file_size, FDFS_UPLOAD_BY_BUFFER,
                                            meta_dict, STORAGE_PROTO_CMD_UPLOAD_SLAVE_FILE, None, remote_filename,
                                            file_ext_name)

    def storage_upload_appender_by_filename(self, tracker_client, store_serv, filename, meta_dict=None):
        file_size = os.stat(filename).st_size
        file_ext_name = get_file_ext_name(filename)
        return self._storage_do_upload_file(tracker_client, store_serv, filename, file_size, FDFS_UPLOAD_BY_FILENAME,
                                            meta_dict, STORAGE_PROTO_CMD_UPLOAD_APPENDER_FILE, None, None,
                                            file_ext_name)

    def storage_upload_appender_by_file(self, tracker_client, store_serv, filename, meta_dict=None):
        file_size = os.stat(filename).st_size
        file_ext_name = get_file_ext_name(filename)
        return self._storage_do_upload_file(tracker_client, store_serv, filename, file_size, FDFS_UPLOAD_BY_FILE,
                                            meta_dict, STORAGE_PROTO_CMD_UPLOAD_APPENDER_FILE, None, None,
                                            file_ext_name)

    def storage_upload_appender_by_buffer(self, tracker_client, store_serv, file_buffer, meta_dict=None,
                                          file_ext_name=None):
        file_size = len(file_buffer)
        return self._storage_do_upload_file(tracker_client, store_serv, file_buffer, file_size, FDFS_UPLOAD_BY_BUFFER,
                                            meta_dict, STORAGE_PROTO_CMD_UPLOAD_APPENDER_FILE, None, None,
                                            file_ext_name)

    def storage_delete_file(self, tracker_client, store_serv, remote_filename):
        '''
        Delete file from storage server.
        '''
        store_conn = self.pool.get_connection()
        th = Tracker_header()
        th.cmd = STORAGE_PROTO_CMD_DELETE_FILE
        file_name_len = len(remote_filename)
        th.pkg_len = FDFS_GROUP_NAME_MAX_LEN + file_name_len
        try:
            th.send_header(store_conn)
            # del_fmt: |-group_name(16)-filename(len)-|
            del_fmt = '!%ds %ds' % (FDFS_GROUP_NAME_MAX_LEN, file_name_len)
            send_buffer = struct.pack(del_fmt, store_serv.group_name, remote_filename)
            tcp_send_data(store_conn, send_buffer)
            th.recv_header(store_conn)
            # if th.status == 2:
            #    raise DataError('[-] Error: remote file %s is not exist.'
            #                    % (store_serv.group_name + __os_sep__.encode() + remote_filename))
            if th.status != 0:
                raise DataError('Error: %d, %s' % (th.status, os.strerror(th.status)))
            # recv_buffer, recv_size = tcp_recv_response(store_conn, th.pkg_len)
        except:
            raise
        finally:
            self.pool.release(store_conn)
        remote_filename = store_serv.group_name + __os_sep__.encode() + remote_filename
        return ('Delete file successed.', remote_filename, store_serv.ip_addr)

    def _storage_do_download_file(self, tracker_client, store_serv, file_buffer, offset, download_size,
                                  download_type, remote_filename):
        '''
        Core of download file from storage server.
        You can choice download type, optional FDFS_DOWNLOAD_TO_FILE or FDFS_DOWNLOAD_TO_BUFFER.
        And you can choice file offset.
        @Return dictionary
            'Remote file name' : remote_filename,
            'Content'          : local_filename or buffer,
            'Download size'    : download_size,
            'Storage IP'       : storage_ip
        '''
        store_conn = self.pool.get_connection()
        th = Tracker_header()
        remote_filename_len = len(remote_filename)
        th.pkg_len = FDFS_PROTO_PKG_LEN_SIZE * 2 + FDFS_GROUP_NAME_MAX_LEN + remote_filename_len
        th.cmd = STORAGE_PROTO_CMD_DOWNLOAD_FILE
        try:
            th.send_header(store_conn)
            # down_fmt: |-offset(8)-download_bytes(8)-group_name(16)-remote_filename(len)-|
            down_fmt = '!Q Q %ds %ds' % (FDFS_GROUP_NAME_MAX_LEN, remote_filename_len)
            send_buffer = struct.pack(down_fmt, offset, download_size, store_serv.group_name, remote_filename)
            tcp_send_data(store_conn, send_buffer)
            th.recv_header(store_conn)
            # if th.status == 2:
            #    raise DataError('[-] Error: remote file %s is not exist.' %
            #                    (store_serv.group_name + __os_sep__.encode() + remote_filename))
            if th.status != 0:
                raise DataError('Error: %d %s' % (th.status, os.strerror(th.status)))
            if download_type == FDFS_DOWNLOAD_TO_FILE:
                total_recv_size = tcp_recv_file(store_conn, file_buffer, th.pkg_len)
            elif download_type == FDFS_DOWNLOAD_TO_BUFFER:
                recv_buffer, total_recv_size = tcp_recv_response(store_conn, th.pkg_len)
        except:
            raise
        finally:
            self.pool.release(store_conn)
        ret_dic = {
            'Remote file_id': store_serv.group_name + __os_sep__.encode() + remote_filename,
            'Content': file_buffer if download_type == FDFS_DOWNLOAD_TO_FILE else recv_buffer,
            'Download size': appromix(total_recv_size),
            'Storage IP': store_serv.ip_addr
        }
        return ret_dic

    def storage_download_to_file(self, tracker_client, store_serv, local_filename, file_offset, download_bytes,
                                 remote_filename):
        return self._storage_do_download_file(tracker_client, store_serv, local_filename, file_offset,
                                              download_bytes, FDFS_DOWNLOAD_TO_FILE, remote_filename)

    def storage_download_to_buffer(self, tracker_client, store_serv, file_buffer, file_offset, download_bytes,
                                   remote_filename):
        return self._storage_do_download_file(tracker_client, store_serv, file_buffer, file_offset,
                                              download_bytes, FDFS_DOWNLOAD_TO_BUFFER, remote_filename)

    def storage_set_metadata(self, tracker_client, store_serv, remote_filename, meta_dict,
                             op_flag=STORAGE_SET_METADATA_FLAG_OVERWRITE):
        ret = 0
        conn = self.pool.get_connection()
        remote_filename_len = len(remote_filename)
        meta_buffer = fdfs_pack_metadata(meta_dict)
        meta_len = len(meta_buffer)
        th = Tracker_header()
        th.pkg_len = FDFS_PROTO_PKG_LEN_SIZE * 2 + 1 + FDFS_GROUP_NAME_MAX_LEN + remote_filename_len + meta_len
        th.cmd = STORAGE_PROTO_CMD_SET_METADATA
        try:
            th.send_header(conn)
            # meta_fmt: |-filename_len(8)-meta_len(8)-op_flag(1)-group_name(16)
            #           -filename(remote_filename_len)-meta(meta_len)|
            meta_fmt = '!Q Q c %ds %ds %ds' % (FDFS_GROUP_NAME_MAX_LEN, remote_filename_len, meta_len)
            send_buffer = struct.pack(meta_fmt, remote_filename_len, meta_len, op_flag, store_serv.group_name,
                                      remote_filename, meta_buffer)
            tcp_send_data(conn, send_buffer)
            th.recv_header(conn)
            if th.status != 0:
                ret = th.status
        except:
            raise
        finally:
            self.pool.release(conn)
        return ret

    def storage_get_metadata(self, tracker_client, store_serv, remote_file_name):
        store_conn = self.pool.get_connection()
        th = Tracker_header()
        remote_filename_len = len(remote_file_name)
        th.pkg_len = FDFS_GROUP_NAME_MAX_LEN + remote_filename_len
        th.cmd = STORAGE_PROTO_CMD_GET_METADATA
        try:
            th.send_header(store_conn)
            # meta_fmt: |-group_name(16)-filename(remote_filename_len)-|
            meta_fmt = '!%ds %ds' % (FDFS_GROUP_NAME_MAX_LEN, remote_filename_len)
            send_buffer = struct.pack(meta_fmt, store_serv.group_name, remote_file_name.encode())
            tcp_send_data(store_conn, send_buffer)
            th.recv_header(store_conn)
            # if th.status == 2:
            #    raise DataError('[-] Error: Remote file %s has no meta data.'
            #                    % (store_serv.group_name + __os_sep__.encode() + remote_file_name))
            if th.status != 0:
                raise DataError('[-] Error:%d, %s' % (th.status, os.strerror(th.status)))
            if th.pkg_len == 0:
                ret_dict = {}
            meta_buffer, recv_size = tcp_recv_response(store_conn, th.pkg_len)
        except:
            raise
        finally:
            self.pool.release(store_conn)
        ret_dict = fdfs_unpack_metadata(meta_buffer)
        return ret_dict

    def _storage_do_append_file(self, tracker_client, store_serv, file_buffer, file_size, upload_type,
                                appended_filename):
        store_conn = self.pool.get_connection()
        th = Tracker_header()
        appended_filename_len = len(appended_filename)
        th.pkg_len = FDFS_PROTO_PKG_LEN_SIZE * 2 + appended_filename_len + file_size
        th.cmd = STORAGE_PROTO_CMD_APPEND_FILE
        try:
            th.send_header(store_conn)
            # append_fmt: |-appended_filename_len(8)-file_size(8)-appended_filename(len)
            #             -filecontent(filesize)-|
            append_fmt = '!Q Q %ds' % appended_filename_len
            send_buffer = struct.pack(append_fmt, appended_filename_len, file_size, appended_filename)
            tcp_send_data(store_conn, send_buffer)
            if upload_type == FDFS_UPLOAD_BY_FILENAME:
                tcp_send_file(store_conn, file_buffer)
            elif upload_type == FDFS_UPLOAD_BY_BUFFER:
                tcp_send_data(store_conn, file_buffer)
            elif upload_type == FDFS_UPLOAD_BY_FILE:
                tcp_send_file_ex(store_conn, file_buffer)
            th.recv_header(store_conn)
            if th.status != 0:
                raise DataError('[-] Error: %d, %s' % (th.status, os.strerror(th.status)))
        except:
            raise
        finally:
            self.pool.release(store_conn)
        ret_dict = {}
        ret_dict['Status'] = 'Append file successed.'
        ret_dict['Appender file name'] = store_serv.group_name + __os_sep__.encode() + appended_filename
        ret_dict['Appended size'] = appromix(file_size)
        ret_dict['Storage IP'] = store_serv.ip_addr
        return ret_dict

    def storage_append_by_filename(self, tracker_client, store_serv, local_filename, appended_filename):
        file_size = os.stat(local_filename).st_size
        return self._storage_do_append_file(tracker_client, store_serv, local_filename, file_size,
                                            FDFS_UPLOAD_BY_FILENAME, appended_filename)

    def storage_append_by_file(self, tracker_client, store_serv, local_filename, appended_filename):
        file_size = os.stat(local_filename).st_size
        return self._storage_do_append_file(tracker_client, store_serv, local_filename, file_size,
                                            FDFS_UPLOAD_BY_FILE, appended_filename)

    def storage_append_by_buffer(self, tracker_client, store_serv, file_buffer, appended_filename):
        file_size = len(file_buffer)
        return self._storage_do_append_file(tracker_client, store_serv, file_buffer, file_size,
                                            FDFS_UPLOAD_BY_BUFFER, appended_filename)

    def _storage_do_truncate_file(self, tracker_client, store_serv, truncated_filesize, appender_filename):
        store_conn = self.pool.get_connection()
        th = Tracker_header()
        th.cmd = STORAGE_PROTO_CMD_TRUNCATE_FILE
        appender_filename_len = len(appender_filename)
        th.pkg_len = FDFS_PROTO_PKG_LEN_SIZE * 2 + appender_filename_len
        try:
            th.send_header(store_conn)
            # truncate_fmt:|-appender_filename_len(8)-truncate_filesize(8)
            #              -appender_filename(len)-|
            truncate_fmt = '!Q Q %ds' % appender_filename_len
            send_buffer = struct.pack(truncate_fmt, appender_filename_len, truncated_filesize, appender_filename)
            tcp_send_data(store_conn, send_buffer)
            th.recv_header(store_conn)
            if th.status != 0:
                raise DataError('[-] Error: %d, %s' % (th.status, os.strerror(th.status)))
        except:
            raise
        finally:
            self.pool.release(store_conn)
        ret_dict = {}
        ret_dict['Status'] = 'Truncate successed.'
        ret_dict['Storage IP'] = store_serv.ip_addr
        return ret_dict

    def storage_truncate_file(self, tracker_client, store_serv, truncated_filesize, appender_filename):
        return self._storage_do_truncate_file(tracker_client, store_serv, truncated_filesize, appender_filename)

    def _storage_do_modify_file(self, tracker_client, store_serv, upload_type, filebuffer, offset, filesize,
                                appender_filename):
        store_conn = self.pool.get_connection()
        th = Tracker_header()
        th.cmd = STORAGE_PROTO_CMD_MODIFY_FILE
        appender_filename_len = len(appender_filename)
        th.pkg_len = FDFS_PROTO_PKG_LEN_SIZE * 3 + appender_filename_len + filesize
        try:
            th.send_header(store_conn)
            # modify_fmt: |-filename_len(8)-offset(8)-filesize(8)-filename(len)-|
            modify_fmt = '!Q Q Q %ds' % appender_filename_len
            send_buffer = struct.pack(modify_fmt, appender_filename_len, offset, filesize, appender_filename)
            tcp_send_data(store_conn, send_buffer)
            if upload_type == FDFS_UPLOAD_BY_FILENAME:
                upload_size = tcp_send_file(store_conn, filebuffer)
            elif upload_type == FDFS_UPLOAD_BY_BUFFER:
                tcp_send_data(store_conn, filebuffer)
            elif upload_type == FDFS_UPLOAD_BY_FILE:
                upload_size = tcp_send_file_ex(store_conn, filebuffer)
            th.recv_header(store_conn)
            if th.status != 0:
                raise DataError('[-] Error: %d, %s' % (th.status, os.strerror(th.status)))
        except:
            raise
        finally:
            self.pool.release(store_conn)
        ret_dict = {}
        ret_dict['Status'] = 'Modify successed.'
        ret_dict['Storage IP'] = store_serv.ip_addr
        return ret_dict

    def storage_modify_by_filename(self, tracker_client, store_serv, filename, offset, filesize, appender_filename):
        return self._storage_do_modify_file(tracker_client, store_serv, FDFS_UPLOAD_BY_FILENAME, filename, offset,
                                            filesize, appender_filename)

    def storage_modify_by_file(self, tracker_client, store_serv, filename, offset, filesize, appender_filename):
        return self._storage_do_modify_file(tracker_client, store_serv, FDFS_UPLOAD_BY_FILE, filename, offset,
                                            filesize, appender_filename)

    def storage_modify_by_buffer(self, tracker_client, store_serv, filebuffer, offset, filesize, appender_filename):
        return self._storage_do_modify_file(tracker_client, store_serv, FDFS_UPLOAD_BY_BUFFER, filebuffer, offset,
                                            filesize, appender_filename)
```
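To sanity-check the patched accounting without a FastDFS server, the sketch below re-implements the fixed receive loop against a local socket pair. It is an assumption-laden approximation: plain `sock.recv` stands in for the library's `tcp_recv_response`, and the helper name `recv_file` is illustrative, not part of the package:

```python
import os
import socket
import tempfile
import threading

def recv_file(sock, local_filename, file_size, buffer_size=1024):
    """Mirror of the patched tcp_recv_file loop: decrement remain by the
    byte count actually received, not by buffer_size."""
    total = 0
    remain = file_size
    with open(local_filename, 'wb') as f:
        while remain > 0:
            chunk = sock.recv(min(remain, buffer_size))
            if not chunk:
                raise ConnectionError('peer closed early')
            f.write(chunk)
            remain -= len(chunk)   # the fix: subtract the actual size
            total += len(chunk)
    return total

payload = os.urandom(5000)
a, b = socket.socketpair()
sender = threading.Thread(target=lambda: (a.sendall(payload), a.close()))
sender.start()
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    path = tmp.name
got = recv_file(b, path, len(payload))
sender.join()
with open(path, 'rb') as f:
    data = f.read()
os.remove(path)
print('received', got)
```

Because the loop counts what actually arrived, it terminates exactly when `file_size` bytes have been written, regardless of how the kernel fragments the reads.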
