Usage:

python WikiExtractor.py -b 500M -o output_filename input_filename.bz2

Parameters:

  • WikiExtractor.py holds the Wikipedia Extractor code;
  • -b 500M sets the maximum size of each output file: a large corpus is split into several files of at most this size (the default behavior). To keep everything in a single file, just set a size larger than the dump being processed;
  • output_filename: the directory in which the extracted files are stored;
  • input_filename.bz2: the path of the .bz2 dump file to extract.
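Each extracted file wraps articles in <doc ...> elements (or, with the --json flag, one JSON object per line), so downstream processing needs a small reader. Below is a minimal sketch of such a reader for the default XML-like format; it is not part of WikiExtractor itself, and both the output_filename directory and the regex (which mirrors the header written by write_output()) are assumptions based on the command above.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Minimal sketch (not part of WikiExtractor): iterate over the
# <doc ...>...</doc> blocks produced by the default output format.
# Assumes the extracted files live under the -o directory given above.
from __future__ import print_function
import io
import os
import re

DOC_RE = re.compile(r'<doc id="([^"]*)"[^>]*title="([^"]*)">\n(.*?)\n</doc>',
                    re.DOTALL)

def iter_docs(output_dir):
    """Yield (id, title, text) for every extracted article."""
    for root, _, files in os.walk(output_dir):
        for name in sorted(files):
            with io.open(os.path.join(root, name), encoding='utf-8') as f:
                for m in DOC_RE.finditer(f.read()):
                    yield m.group(1), m.group(2), m.group(3)

if __name__ == '__main__':
    for doc_id, title, text in iter_docs('output_filename'):
        print(doc_id, title, len(text))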
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# =============================================================================
#  Version: 2.75 (March 4, 2017)
#  Author: Giuseppe Attardi (attardi@di.unipi.it), University of Pisa
#
#  Contributors:
#   Antonio Fuschetto (fuschett@aol.com)
#   Leonardo Souza (lsouza@amtera.com.br)
#   Juan Manuel Caicedo (juan@cavorite.com)
#   Humberto Pereira (begini@gmail.com)
#   Siegfried-A. Gevatter (siegfried@gevatter.com)
#   Pedro Assis (pedroh2306@gmail.com)
#   Wim Muskee (wimmuskee@gmail.com)
#   Radics Geza (radicsge@gmail.com)
#   orangain (orangain@gmail.com)
#   Seth Cleveland (scleveland@turnitin.com)
#   Bren Barn
#
# =============================================================================
#  Copyright (c) 2011-2017. Giuseppe Attardi (attardi@di.unipi.it).
# =============================================================================
#  This file is part of Tanl.
#
#  Tanl is free software; you can redistribute it and/or modify it
#  under the terms of the GNU General Public License, version 3,
#  as published by the Free Software Foundation.
#
#  Tanl is distributed in the hope that it will be useful,
#  but WITHOUT ANY WARRANTY; without even the implied warranty of
#  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#  GNU General Public License at <http://www.gnu.org/licenses/> for more details.
#
# =============================================================================

"""Wikipedia Extractor:
Extracts and cleans text from a Wikipedia database dump and stores output in a
number of files of similar size in a given directory.
Each file will contain several documents in the format:

    <doc id="" revid="" url="" title="">
        ...
        </doc>

If the program is invoked with the --json flag, then each file will
contain several documents formatted as JSON objects, one per line, with
the following structure:

    {"id": "", "revid": "", "url": "", "title": "", "text": "..."}

Template expansion requires preprocessing first the whole dump and
collecting template definitions.
"""

from __future__ import unicode_literals, division

import sys
import argparse
import bz2
import codecs
import cgi
import fileinput
import logging
import os.path
import re  # TODO: use regex when it becomes standard
import time
import json
from io import StringIO
from multiprocessing import Queue, Process, Value, cpu_count
from timeit import default_timer

PY2 = sys.version_info[0] == 2

# Python 2.7 compatibility
if PY2:
    from urllib import quote
    from htmlentitydefs import name2codepoint
    from itertools import izip as zip, izip_longest as zip_longest
    range = xrange  # Use Python 3 equivalent
    chr = unichr    # Use Python 3 equivalent
    text_type = unicode

    class SimpleNamespace(object):

        def __init__(self, **kwargs):
            self.__dict__.update(kwargs)

        def __repr__(self):
            keys = sorted(self.__dict__)
            items = ("{}={!r}".format(k, self.__dict__[k]) for k in keys)
            return "{}({})".format(type(self).__name__, ", ".join(items))

        def __eq__(self, other):
            return self.__dict__ == other.__dict__
else:
    from urllib.parse import quote
    from html.entities import name2codepoint
    from itertools import zip_longest
    from types import SimpleNamespace
    text_type = str


# ===========================================================================

# Program version
version = '2.75'

## PARAMS ####################################################################

options = SimpleNamespace(

    ##
    # Defined in <siteinfo>
    # We include as default Template, when loading external template file.
    knownNamespaces = {'Template': 10},

    ##
    # The namespace used for template definitions
    # It is the name associated with namespace key=10 in the siteinfo header.
    templateNamespace = '',
    templatePrefix = '',

    ##
    # The namespace used for module definitions
    # It is the name associated with namespace key=828 in the siteinfo header.
    moduleNamespace = '',

    ##
    # Recognize only these namespaces in links
    # w: Internal links to the Wikipedia
    # wiktionary: Wiki dictionary
    # wikt: shortcut for Wiktionary
    #
    acceptedNamespaces = ['w', 'wiktionary', 'wikt'],

    # This is obtained from <siteinfo>
    urlbase = '',

    ##
    # Filter disambiguation pages
    filter_disambig_pages = False,

    ##
    # Drop tables from the article
    keep_tables = False,

    ##
    # Whether to preserve links in output
    keepLinks = False,

    ##
    # Whether to preserve section titles
    keepSections = True,

    ##
    # Whether to preserve lists
    keepLists = False,

    ##
    # Whether to output HTML instead of text
    toHTML = False,

    ##
    # Whether to write json instead of the xml-like default output format
    write_json = False,

    ##
    # Whether to expand templates
    expand_templates = True,

    ##
    ## Whether to escape doc content
    escape_doc = False,

    ##
    # Print the wikipedia article revision
    print_revision = False,

    ##
    # Minimum expanded text length required to print document
    min_text_length = 0,

    # Shared objects holding templates, redirects and cache
    templates = {},
    redirects = {},
    # cache of parser templates
    # FIXME: sharing this with a Manager slows down.
    templateCache = {},

    # Elements to ignore/discard
    ignored_tag_patterns = [],

    filter_category_include = set(),
    filter_category_exclude = set(),

    log_file = None,

    discardElements = [
        'gallery', 'timeline', 'noinclude', 'pre',
        'table', 'tr', 'td', 'th', 'caption', 'div',
        'form', 'input', 'select', 'option', 'textarea',
        'ul', 'li', 'ol', 'dl', 'dt', 'dd', 'menu', 'dir',
        'ref', 'references', 'img', 'imagemap', 'source', 'small',
        'sub', 'sup', 'indicator'
    ],
)

##
# Keys for Template and Module namespaces
templateKeys = set(['10', '828'])

##
# Regex for identifying disambig pages
filter_disambig_page_pattern = re.compile("{{disambig(uation)?(\|[^}]*)?}}|__DISAMBIG__")

##
g_page_total = 0
g_page_articl_total = 0
g_page_articl_used_total = 0


# page filtering logic -- remove templates, undesired xml namespaces, and disambiguation pages
def keepPage(ns, catSet, page):
    global g_page_articl_total, g_page_total, g_page_articl_used_total
    g_page_total += 1
    if ns != '0':               # Article
        return False
    # remove disambig pages if desired
    g_page_articl_total += 1
    if options.filter_disambig_pages:
        for line in page:
            if filter_disambig_page_pattern.match(line):
                return False
    if len(options.filter_category_include) > 0 and len(options.filter_category_include & catSet) == 0:
        logging.debug("***No include  " + str(catSet))
        return False
    if len(options.filter_category_exclude) > 0 and len(options.filter_category_exclude & catSet) > 0:
        logging.debug("***Exclude  " + str(catSet))
        return False
    g_page_articl_used_total += 1
    return True


def get_url(uid):
    return "%s?curid=%s" % (options.urlbase, uid)


# =========================================================================
#
# MediaWiki Markup Grammar
# https://www.mediawiki.org/wiki/Preprocessor_ABNF

# xml-char = %x9 / %xA / %xD / %x20-D7FF / %xE000-FFFD / %x10000-10FFFF
# sptab = SP / HTAB

# ; everything except ">" (%x3E)
# attr-char = %x9 / %xA / %xD / %x20-3D / %x3F-D7FF / %xE000-FFFD / %x10000-10FFFF

# literal         = *xml-char
# title           = wikitext-L3
# part-name       = wikitext-L3
# part-value      = wikitext-L3
# part            = ( part-name "=" part-value ) / ( part-value )
# parts           = [ title *( "|" part ) ]
# tplarg          = "{{{" parts "}}}"
# template        = "{{" parts "}}"
# link            = "[[" wikitext-L3 "]]"

# comment         = "<!--" literal "-->"
# unclosed-comment = "<!--" literal END
# ; the + in the line-eating-comment rule was absent between MW 1.12 and MW 1.22
# line-eating-comment = LF LINE-START *SP +( comment *SP ) LINE-END

# attr            = *attr-char
# nowiki-element  = "<nowiki" attr ( "/>" / ( ">" literal ( "</nowiki>" / END ) ) )

# wikitext-L2     = heading / wikitext-L3 / *wikitext-L2
# wikitext-L3     = literal / template / tplarg / link / comment /
#                   line-eating-comment / unclosed-comment / xmlish-element /
#                   *wikitext-L3

# ------------------------------------------------------------------------------

selfClosingTags = ('br', 'hr', 'nobr', 'ref', 'references', 'nowiki')

placeholder_tags = {'math': 'formula', 'code': 'codice'}


def normalizeTitle(title):
    """Normalize title"""
    # remove leading/trailing whitespace and underscores
    title = title.strip(' _')
    # replace sequences of whitespace and underscore chars with a single space
    title = re.sub(r'[\s_]+', ' ', title)

    m = re.match(r'([^:]*):(\s*)(\S(?:.*))', title)
    if m:
        prefix = m.group(1)
        if m.group(2):
            optionalWhitespace = ' '
        else:
            optionalWhitespace = ''
        rest = m.group(3)

        ns = normalizeNamespace(prefix)
        if ns in options.knownNamespaces:
            # If the prefix designates a known namespace, then it might be
            # followed by optional whitespace that should be removed to get
            # the canonical page name
            # (e.g., "Category:  Births" should become "Category:Births").
            title = ns + ":" + ucfirst(rest)
        else:
            # No namespace, just capitalize first letter.
            # If the part before the colon is not a known namespace, then we
            # must not remove the space after the colon (if any), e.g.,
            # "3001: The_Final_Odyssey" != "3001:The_Final_Odyssey".
            # However, to get the canonical page name we must contract multiple
            # spaces into one, because
            # "3001:   The_Final_Odyssey" != "3001: The_Final_Odyssey".
            title = ucfirst(prefix) + ":" + optionalWhitespace + ucfirst(rest)
    else:
        # no namespace, just capitalize first letter
        title = ucfirst(title)
    return title


def unescape(text):
    """
    Removes HTML or XML character references and entities from a text string.

    :param text The HTML (or XML) source text.
    :return The plain text, as a Unicode string, if necessary.
    """

    def fixup(m):
        text = m.group(0)
        code = m.group(1)
        try:
            if text[1] == "#":  # character reference
                if text[2] == "x":
                    return chr(int(code[1:], 16))
                else:
                    return chr(int(code))
            else:  # named entity
                return chr(name2codepoint[code])
        except:
            return text  # leave as is

    return re.sub("&#?(\w+);", fixup, text)


# Match HTML comments
# The buggy template {{Template:T}} has a comment terminating with just "->"
comment = re.compile(r'<!--.*?-->', re.DOTALL)

# Match <nowiki>...</nowiki>
nowiki = re.compile(r'<nowiki>.*?</nowiki>')


def ignoreTag(tag):
    left = re.compile(r'<%s\b.*?>' % tag, re.IGNORECASE | re.DOTALL)  # both <ref> and <reference>
    right = re.compile(r'</\s*%s>' % tag, re.IGNORECASE)
    options.ignored_tag_patterns.append((left, right))


# Match selfClosing HTML tags
selfClosing_tag_patterns = [
    re.compile(r'<\s*%s\b[^>]*/\s*>' % tag, re.DOTALL | re.IGNORECASE) for tag in selfClosingTags
]

# Match HTML placeholder tags
placeholder_tag_patterns = [
    (re.compile(r'<\s*%s(\s*| [^>]+?)>.*?<\s*/\s*%s\s*>' % (tag, tag), re.DOTALL | re.IGNORECASE),
     repl) for tag, repl in placeholder_tags.items()
]

# Match preformatted lines
preformatted = re.compile(r'^ .*?$')

# Match external links (space separates second optional parameter)
externalLink = re.compile(r'\[\w+[^ ]*? (.*?)]')
externalLinkNoAnchor = re.compile(r'\[\w+[&\]]*\]')

# Matches bold/italic
bold_italic = re.compile(r"'''''(.*?)'''''")
bold = re.compile(r"'''(.*?)'''")
italic_quote = re.compile(r"''\"([^\"]*?)\"''")
italic = re.compile(r"''(.*?)''")
quote_quote = re.compile(r'""([^"]*?)""')

# Matches space
spaces = re.compile(r' {2,}')

# Matches dots
dots = re.compile(r'\.{4,}')


# ======================================================================


class Template(list):
    """A Template is a list of TemplateText or TemplateArgs"""

    @classmethod
    def parse(cls, body):
        tpl = Template()
        # we must handle nesting, s.a.
        # {{{1|{{PAGENAME}}}
        # {{{italics|{{{italic|}}}
        # {{#if:{{{{{#if:{{{nominee|}}}|nominee|candidate}}|}}}|
        #
        start = 0
        for s, e in findMatchingBraces(body, 3):
            tpl.append(TemplateText(body[start:s]))
            tpl.append(TemplateArg(body[s + 3:e - 3]))
            start = e
        tpl.append(TemplateText(body[start:]))  # leftover
        return tpl

    def subst(self, params, extractor, depth=0):
        # We perform parameter substitutions recursively.
        # We also limit the maximum number of iterations to avoid too long or
        # even endless loops (in case of malformed input).

        # :see: http://meta.wikimedia.org/wiki/Help:Expansion#Distinction_between_variables.2C_parser_functions.2C_and_templates
        #
        # Parameter values are assigned to parameters in two (?) passes.
        # Therefore a parameter name in a template can depend on the value of
        # another parameter of the same template, regardless of the order in
        # which they are specified in the template call, for example, using
        # Template:ppp containing "{{{{{{p}}}}}}", {{ppp|p=q|q=r}} and even
        # {{ppp|q=r|p=q}} gives r, but using Template:tvvv containing
        # "{{{{{{{{{p}}}}}}}}}", {{tvvv|p=q|q=r|r=s}} gives s.

        # logging.debug('%*ssubst tpl %d %s', extractor.frame.length, '', depth, self)

        if depth > extractor.maxParameterRecursionLevels:
            extractor.recursion_exceeded_3_errs += 1
            return ''

        return ''.join([tpl.subst(params, extractor, depth) for tpl in self])

    def __str__(self):
        return ''.join([text_type(x) for x in self])


class TemplateText(text_type):
    """Fixed text of template"""

    def subst(self, params, extractor, depth):
        return self


class TemplateArg(object):
    """
    parameter to a template.
    Has a name and a default value, both of which are Templates.
    """

    def __init__(self, parameter):
        """
        :param parameter: the parts of a tplarg.
        """
        # the parameter name itself might contain templates, e.g.:
        #   appointe{{#if:{{{appointer14|}}}|r|d}}14|
        #   4|{{{{{subst|}}}CURRENTYEAR}}

        # any parts in a tplarg after the first (the parameter default) are
        # ignored, and an equals sign in the first part is treated as plain text.
        # logging.debug('TemplateArg %s', parameter)

        parts = splitParts(parameter)
        self.name = Template.parse(parts[0])
        if len(parts) > 1:
            # This parameter has a default value
            self.default = Template.parse(parts[1])
        else:
            self.default = None

    def __str__(self):
        if self.default:
            return '{{{%s|%s}}}' % (self.name, self.default)
        else:
            return '{{{%s}}}' % self.name

    def subst(self, params, extractor, depth):
        """
        Substitute value for this argument from dict :param params:
        Use :param extractor: to evaluate expressions for name and default.
        Limit substitution to the maximum :param depth:.
        """
        # the parameter name itself might contain templates, e.g.:
        # appointe{{#if:{{{appointer14|}}}|r|d}}14|
        paramName = self.name.subst(params, extractor, depth + 1)
        paramName = extractor.transform(paramName)
        res = ''
        if paramName in params:
            res = params[paramName]  # use parameter value specified in template invocation
        elif self.default:  # use the default value
            defaultValue = self.default.subst(params, extractor, depth + 1)
            res = extractor.transform(defaultValue)
        # logging.debug('subst arg %d %s -> %s' % (depth, paramName, res))
        return res


class Frame(object):

    def __init__(self, title='', args=[], prev=None):
        self.title = title
        self.args = args
        self.prev = prev
        self.depth = prev.depth + 1 if prev else 0

    def push(self, title, args):
        return Frame(title, args, self)

    def pop(self):
        return self.prev

    def __str__(self):
        res = ''
        prev = self.prev
        while prev:
            if res: res += ', '
            res += '(%s, %s)' % (prev.title, prev.args)
            prev = prev.prev
        return '<Frame [' + res + ']>'


# ======================================================================

substWords = 'subst:|safesubst:'


class Extractor(object):
    """An extraction task on an article."""

    def __init__(self, id, revid, title, lines):
        """
        :param id: id of page.
        :param title: title of page.
        :param lines: a list of lines.
        """
        self.id = id
        self.revid = revid
        self.title = title
        self.text = ''.join(lines)
        self.magicWords = MagicWords()
        self.frame = Frame()
        self.recursion_exceeded_1_errs = 0  # template recursion within expand()
        self.recursion_exceeded_2_errs = 0  # template recursion within expandTemplate()
        self.recursion_exceeded_3_errs = 0  # parameter recursion
        self.template_title_errs = 0

    def write_output(self, out, text):
        """
        :param out: a memory file
        :param text: the text of the page
        """
        url = get_url(self.id)
        if options.write_json:
            json_data = {
                'id': self.id,
                'url': url,
                'title': self.title,
                'text': "\n".join(text)
            }
            if options.print_revision:
                json_data['revid'] = self.revid
            # We don't use json.dump(data, out) because we want to be
            # able to encode the string if the output is sys.stdout
            out_str = json.dumps(json_data, ensure_ascii=False)
            if out == sys.stdout:   # option -a or -o -
                out_str = out_str.encode('utf-8')
            out.write(out_str)
            out.write('\n')
        else:
            if options.print_revision:
                header = '<doc id="%s" revid="%s" url="%s" title="%s">\n' % (self.id, self.revid, url, self.title)
            else:
                header = '<doc id="%s" url="%s" title="%s">\n' % (self.id, url, self.title)
            footer = "\n</doc>\n"
            if out == sys.stdout:   # option -a or -o -
                header = header.encode('utf-8')
            out.write(header)
            for line in text:
                if out == sys.stdout:   # option -a or -o -
                    line = line.encode('utf-8')
                out.write(line)
                out.write('\n')
            out.write(footer)

    def extract(self, out):
        """
        :param out: a memory file.
        """
        logging.info('%s\t%s', self.id, self.title)

        # Separate header from text with a newline.
        if options.toHTML:
            title_str = '<h1>' + self.title + '</h1>'
        else:
            title_str = self.title + '\n'
        # https://www.mediawiki.org/wiki/Help:Magic_words
        colon = self.title.find(':')
        if colon != -1:
            ns = self.title[:colon]
            pagename = self.title[colon + 1:]
        else:
            ns = ''  # Main
            pagename = self.title
        self.magicWords['NAMESPACE'] = ns
        self.magicWords['NAMESPACENUMBER'] = options.knownNamespaces.get(ns, '0')
        self.magicWords['PAGENAME'] = pagename
        self.magicWords['FULLPAGENAME'] = self.title
        slash = pagename.rfind('/')
        if slash != -1:
            self.magicWords['BASEPAGENAME'] = pagename[:slash]
            self.magicWords['SUBPAGENAME'] = pagename[slash + 1:]
        else:
            self.magicWords['BASEPAGENAME'] = pagename
            self.magicWords['SUBPAGENAME'] = ''
        slash = pagename.find('/')
        if slash != -1:
            self.magicWords['ROOTPAGENAME'] = pagename[:slash]
        else:
            self.magicWords['ROOTPAGENAME'] = pagename
        self.magicWords['CURRENTYEAR'] = time.strftime('%Y')
        self.magicWords['CURRENTMONTH'] = time.strftime('%m')
        self.magicWords['CURRENTDAY'] = time.strftime('%d')
        self.magicWords['CURRENTHOUR'] = time.strftime('%H')
        self.magicWords['CURRENTTIME'] = time.strftime('%H:%M:%S')
        text = self.text
        self.text = ''          # save memory
        #
        # @see https://doc.wikimedia.org/mediawiki-core/master/php/classParser.html
        # This does the equivalent of internalParse():
        #
        # $dom = $this->preprocessToDom( $text, $flag );
        # $text = $frame->expand( $dom );
        #
        text = self.transform(text)
        text = self.wiki2text(text)
        text = compact(self.clean(text))
        # from zwChan
        text = [title_str] + text

        if sum(len(line) for line in text) < options.min_text_length:
            return

        self.write_output(out, text)

        errs = (self.template_title_errs,
                self.recursion_exceeded_1_errs,
                self.recursion_exceeded_2_errs,
                self.recursion_exceeded_3_errs)
        if any(errs):
            logging.warn("Template errors in article '%s' (%s): title(%d) recursion(%d, %d, %d)",
                         self.title, self.id, *errs)

    def transform(self, wikitext):
        """
        Transforms wiki markup.
        @see https://www.mediawiki.org/wiki/Help:Formatting
        """
        # look for matching <nowiki>...</nowiki>
        res = ''
        cur = 0
        for m in nowiki.finditer(wikitext, cur):
            res += self.transform1(wikitext[cur:m.start()]) + wikitext[m.start():m.end()]
            cur = m.end()
        # leftover
        res += self.transform1(wikitext[cur:])
        return res

    def transform1(self, text):
        """Transform text not containing <nowiki>"""
        if options.expand_templates:
            # expand templates
            # See: http://www.mediawiki.org/wiki/Help:Templates
            return self.expand(text)
        else:
            # Drop transclusions (template, parser functions)
            return dropNested(text, r'{{', r'}}')

    def wiki2text(self, text):
        #
        # final part of internalParse()
        #
        # $text = $this->doTableStuff( $text );
        # $text = preg_replace( '/(^|\n)-----*/', '\\1<hr />', $text );
        # $text = $this->doDoubleUnderscore( $text );
        # $text = $this->doHeadings( $text );
        # $text = $this->replaceInternalLinks( $text );
        # $text = $this->doAllQuotes( $text );
        # $text = $this->replaceExternalLinks( $text );
        # $text = str_replace( self::MARKER_PREFIX . 'NOPARSE', '', $text );
        # $text = $this->doMagicLinks( $text );
        # $text = $this->formatHeadings( $text, $origText, $isMain );

        # Drop tables
        # first drop residual templates, or else empty parameter |} might look like end of table.
        if not options.keep_tables:
            text = dropNested(text, r'{{', r'}}')
            text = dropNested(text, r'{\|', r'\|}')

        # Handle bold/italic/quote
        if options.toHTML:
            text = bold_italic.sub(r'<b>\1</b>', text)
            text = bold.sub(r'<b>\1</b>', text)
            text = italic.sub(r'<i>\1</i>', text)
        else:
            text = bold_italic.sub(r'\1', text)
            text = bold.sub(r'\1', text)
            text = italic_quote.sub(r'"\1"', text)
            text = italic.sub(r'"\1"', text)
            text = quote_quote.sub(r'"\1"', text)
        # residuals of unbalanced quotes
        text = text.replace("'''", '').replace("''", '"')

        # replace internal links
        text = replaceInternalLinks(text)

        # replace external links
        text = replaceExternalLinks(text)

        # drop MagicWords behavioral switches
        text = magicWordsRE.sub('', text)

        # ############### Process HTML ###############

        # turn into HTML, except for the content of <syntaxhighlight>
        res = ''
        cur = 0
        for m in syntaxhighlight.finditer(text):
            res += unescape(text[cur:m.start()]) + m.group(1)
            cur = m.end()
        text = res + unescape(text[cur:])
        return text

    def clean(self, text):
        """
        Removes irrelevant parts from :param: text.
        """

        # Collect spans
        spans = []
        # Drop HTML comments
        for m in comment.finditer(text):
            spans.append((m.start(), m.end()))

        # Drop self-closing tags
        for pattern in selfClosing_tag_patterns:
            for m in pattern.finditer(text):
                spans.append((m.start(), m.end()))

        # Drop ignored tags
        for left, right in options.ignored_tag_patterns:
            for m in left.finditer(text):
                spans.append((m.start(), m.end()))
            for m in right.finditer(text):
                spans.append((m.start(), m.end()))

        # Bulk remove all spans
        text = dropSpans(spans, text)

        # Drop discarded elements
        for tag in options.discardElements:
            text = dropNested(text, r'<\s*%s\b[^>/]*>' % tag, r'<\s*/\s*%s>' % tag)

        if not options.toHTML:
            # Turn into text what is left (&amp;nbsp;) and <syntaxhighlight>
            text = unescape(text)

        # Expand placeholders
        for pattern, placeholder in placeholder_tag_patterns:
            index = 1
            for match in pattern.finditer(text):
                text = text.replace(match.group(), '%s_%d' % (placeholder, index))
                index += 1

        text = text.replace('<<', '«').replace('>>', '»')

        #############################################

        # Cleanup text
        text = text.replace('\t', ' ')
        text = spaces.sub(' ', text)
        text = dots.sub('...', text)
        text = re.sub(' (,:\.\)\]»)', r'\1', text)
        text = re.sub('(\[\(«) ', r'\1', text)
        text = re.sub(r'\n\W+?\n', '\n', text, flags=re.U)  # lines with only punctuations
        text = text.replace(',,', ',').replace(',.', '.')
        if options.keep_tables:
            # the following regular expressions are used to remove the wikiml characters around table structures
            # yet keep the content. The order here is important so we remove certain markup like {| and then
            # the future html attributes such as 'style'. Finally we drop the remaining '|-' that delimits cells.
            text = re.sub(r'!(?:\s)?style=\"[a-z]+:(?:\d+)%;\"', r'', text)
            text = re.sub(r'!(?:\s)?style="[a-z]+:(?:\d+)%;[a-z]+:(?:#)?(?:[0-9a-z]+)?"', r'', text)
            text = text.replace('|-', '')
            text = text.replace('|', '')
        if options.toHTML:
            text = html.escape(text)
        return text

    # ----------------------------------------------------------------------
    # Expand templates

    maxTemplateRecursionLevels = 30
    maxParameterRecursionLevels = 10

    # check for template beginning
    reOpen = re.compile('(?<!{){{(?!{)', re.DOTALL)

    def expand(self, wikitext):
        """
        :param wikitext: the text to be expanded.

        Templates are frequently nested. Occasionally, parsing mistakes may
        cause template insertion to enter an infinite loop, for instance when
        trying to instantiate Template:Country

        {{country_{{{1}}}|{{{2}}}|{{{2}}}|size={{{size|}}}|name={{{name|}}}}}

        which is repeatedly trying to insert template 'country_', which is
        again resolved to Template:Country. The straightforward solution of
        keeping track of templates that were already inserted for the current
        article would not work, because the same template may legally be used
        more than once, with different parameters in different parts of the
        article.  Therefore, we limit the number of iterations of nested
        template inclusion.
        """
        # Test template expansion at:
        # https://en.wikipedia.org/wiki/Special:ExpandTemplates
        # https://it.wikipedia.org/wiki/Speciale:EspandiTemplate

        res = ''
        if self.frame.depth >= self.maxTemplateRecursionLevels:
            self.recursion_exceeded_1_errs += 1
            return res

        # logging.debug('%*s<expand', self.frame.depth, '')

        cur = 0
        # look for matching {{...}}
        for s, e in findMatchingBraces(wikitext, 2):
            res += wikitext[cur:s] + self.expandTemplate(wikitext[s + 2:e - 2])
            cur = e
        # leftover
        res += wikitext[cur:]
        # logging.debug('%*sexpand> %s', self.frame.depth, '', res)
        return res

    def templateParams(self, parameters):
        """
        Build a dictionary with positional or name key to expanded parameters.
        :param parameters: the parts[1:] of a template, i.e. all except the title.
        """
        templateParams = {}

        if not parameters:
            return templateParams
        # logging.debug('%*s<templateParams: %s', self.frame.length, '', '|'.join(parameters))

        # Parameters can be either named or unnamed. In the latter case, their
        # name is defined by their ordinal position (1, 2, 3, ...).

        unnamedParameterCounter = 0

        # It's legal for unnamed parameters to be skipped, in which case they
        # will get default values (if available) during actual instantiation.
        # That is {{template_name|a||c}} means parameter 1 gets
        # the value 'a', parameter 2 value is not defined, and parameter 3 gets
        # the value 'c'.  This case is correctly handled by function 'split',
        # and does not require any special handling.
        for param in parameters:
            # Spaces before or after a parameter value are normally ignored,
            # UNLESS the parameter contains a link (to prevent possible gluing
            # the link to the following text after template substitution)

            # Parameter values may contain "=" symbols, hence the parameter
            # name extends up to the first such symbol.

            # It is legal for a parameter to be specified several times, in
            # which case the last assignment takes precedence. Example:
            # "{{t|a|b|c|2=B}}" is equivalent to "{{t|a|B|c}}".
            # Therefore, we don't check if the parameter has been assigned a
            # value before, because anyway the last assignment should override
            # any previous ones.
            # FIXME: Don't use DOTALL here since parameters may be tags with
            # attributes, e.g. <div class="templatequotecite">

            # Parameters may span several lines, like:
            # {{Reflist|colwidth=30em|refs=
            # &lt;ref name=&quot;Goode&quot;&gt;Title&lt;/ref&gt;

            # The '=' might occur within an HTML attribute:
            #   "&lt;ref name=value"
            # but we stop at first.
            m = re.match(' *([^=]*?) *?=(.*)', param, re.DOTALL)
            if m:
                # This is a named parameter.  This case also handles parameter
                # assignments like "2=xxx", where the number of an unnamed
                # parameter ("2") is specified explicitly - this is handled
                # transparently.

                parameterName = m.group(1).strip()
                parameterValue = m.group(2)

                if ']]' not in parameterValue:  # if the value does not contain a link, trim whitespace
                    parameterValue = parameterValue.strip()
                templateParams[parameterName] = parameterValue
            else:
                # this is an unnamed parameter
                unnamedParameterCounter += 1

                if ']]' not in param:  # if the value does not contain a link, trim whitespace
                    param = param.strip()
                templateParams[str(unnamedParameterCounter)] = param
        # logging.debug('%*stemplateParams> %s', self.frame.length, '', '|'.join(templateParams.values()))
        return templateParams

    def expandTemplate(self, body):
        """Expands template invocation.
        :param body: the parts of a template.

        :see http://meta.wikimedia.org/wiki/Help:Expansion for an explanation
        of the process.

        See in particular: Expansion of names and values
        http://meta.wikimedia.org/wiki/Help:Expansion#Expansion_of_names_and_values

        For most parser functions all names and values are expanded,
        regardless of what is relevant for the result. The branching functions
        (#if, #ifeq, #iferror, #ifexist, #ifexpr, #switch) are exceptions.

        All names in a template call are expanded, and the titles of the
        tplargs in the template body, after which it is determined which
        values must be expanded, and for which tplargs in the template body
        the first part (default) [sic in the original doc page].

        In the case of a tplarg, any parts beyond the first are never
        expanded.  The possible name and the value of the first part is
        expanded if the title does not match a name in the template call.

        :see code for braceSubstitution at
        https://doc.wikimedia.org/mediawiki-core/master/php/html/Parser_8php_source.html#3397:
        """
        # template        = "{{" parts "}}"

        # Templates and tplargs are decomposed in the same way, with pipes as
        # separator, even though eventually any parts in a tplarg after the first
        # (the parameter default) are ignored, and an equals sign in the first
        # part is treated as plain text.
        # Pipes inside inner templates and tplargs, or inside double rectangular
        # brackets within the template or tplargs are not taken into account in
        # this decomposition.
        # The first part is called title, the other parts are simply called parts.

        # If a part has one or more equals signs in it, the first equals sign
        # determines the division into name = value. Equals signs inside inner
        # templates and tplargs, or inside double rectangular brackets within the
        # part are not taken into account in this decomposition. Parts without
        # equals sign are indexed 1, 2, .., given as attribute in the <name> tag.

        if self.frame.depth >= self.maxTemplateRecursionLevels:
            self.recursion_exceeded_2_errs += 1
            # logging.debug('%*sEXPAND> %s', self.frame.depth, '', body)
            return ''

        logging.debug('%*sEXPAND %s', self.frame.depth, '', body)

        parts = splitParts(body)
        # title is the portion before the first |
        title = parts[0].strip()
        title = self.expand(title)

        # SUBST
        # Apply the template tag to parameters without
        # substituting into them, e.g.
        # {{subst:t|a{{{p|q}}}b}} gives the wikitext start-a{{{p|q}}}b-end
        # @see https://www.mediawiki.org/wiki/Manual:Substitution#Partial_substitution
        subst = False
        if re.match(substWords, title, re.IGNORECASE):
            title = re.sub(substWords, '', title, 1, re.IGNORECASE)
            subst = True

        if title in self.magicWords.values:
            ret = self.magicWords[title]
            logging.debug('%*s<EXPAND %s %s', self.frame.depth, '', title, ret)
            return ret

        # Parser functions.

        # For most parser functions all names and values are expanded,
        # regardless of what is relevant for the result.  The branching
        # functions (#if, #ifeq, #iferror, #ifexist, #ifexpr, #switch) are
        # exceptions: for #if, #iferror, #ifexist, #ifexp, only the part that
        # is applicable is expanded; for #ifeq the first and the applicable
        # part are expanded; for #switch, expanded are the names up to and
        # including the match (or all if there is no match), and the value in
        # the case of a match or if there is no match, the default, if any.

        # The first argument is everything after the first colon.
        # It has been evaluated above.
        colon = title.find(':')
        if colon > 1:
            funct = title[:colon]
            parts[0] = title[colon + 1:].strip()  # side-effect (parts[0] not used later)
            # arguments after first are not evaluated
            ret = callParserFunction(funct, parts, self)
            logging.debug('%*s<EXPAND %s %s', self.frame.depth, '', funct, ret)
            return ret

        title = fullyQualifiedTemplateTitle(title)
        if not title:
            self.template_title_errs += 1
            return ''

        redirected = options.redirects.get(title)
        if redirected:
            title = redirected

        # get the template
        if title in options.templateCache:
            template = options.templateCache[title]
        elif title in options.templates:
            template = Template.parse(options.templates[title])
            # add it to cache
            options.templateCache[title] = template
            del options.templates[title]
        else:
            # The page being included could not be identified
            logging.debug('%*s<EXPAND %s %s', self.frame.depth, '', title, '')
            return ''

        logging.debug('%*sTEMPLATE %s: %s', self.frame.depth, '', title, template)

        # tplarg          = "{{{" parts "}}}"
        # parts           = [ title *( "|" part ) ]
        # part            = ( part-name "=" part-value ) / ( part-value )
        # part-name       = wikitext-L3
        # part-value      = wikitext-L3
        # wikitext-L3     = literal / template / tplarg / link / comment /
        #                   line-eating-comment / unclosed-comment /
        #                   xmlish-element / *wikitext-L3

        # A tplarg may contain other parameters as well as templates, e.g.:
        #   {{{text|{{{quote|{{{1|{{error|Error: No text given}}}}}}}}}}}
        # hence no simple RE like this would work:
        #   '{{{((?:(?!{{{).)*?)}}}'
        # We must use full CF parsing.

        # the parameter name itself might be computed, e.g.:
        #   {{{appointe{{#if:{{{appointer14|}}}|r|d}}14|}}}

        # Because of the multiple uses of double-brace and triple-brace
        # syntax, expressions can sometimes be ambiguous.
        # Precedence rules specified here:
        # http://www.mediawiki.org/wiki/Preprocessor_ABNF#Ideal_precedence
        # resolve ambiguities like this:
        #   {{{{ }}}} -> { {{{ }}} }
        #   {{{{{ }}}}} -> {{ {{{ }}} }}
        #
        # :see: https://en.wikipedia.org/wiki/Help:Template#Handling_parameters

        params = parts[1:]

        # Order of evaluation.
        # Template parameters are fully evaluated before they are passed to the template.
        # :see: https://www.mediawiki.org/wiki/Help:Templates#Order_of_evaluation
        if not subst:
            # Evaluate parameters, since they may contain templates, including
            # the symbol "=".
            # {{#ifexpr: {{{1}}} = 1 }}
            params = [self.transform(p) for p in params]

        # build a dict of name-values for the parameter values
        params = self.templateParams(params)

        # Perform parameter substitution.
        # Extend frame before subst, since there may be recursion in default
        # parameter value, e.g. {{OTRS|celebrative|date=April 2015}} in article
        # 21637542 in enwiki.
        self.frame = self.frame.push(title, params)
        instantiated = template.subst(params, self)
        value = self.transform(instantiated)
        self.frame = self.frame.pop()
        logging.debug('%*s<EXPAND %s %s', self.frame.depth, '', title, value)
        return value


# ----------------------------------------------------------------------
# parameter handling


def splitParts(paramsList):
    """
    :param paramsList: the parts of a template or tplarg.

    Split template parameters at the separator "|".
    separator "=".

    Template parameters often contain URLs, internal links, text or even
    template expressions, since we evaluate templates outside in.
    This is required for cases like:
      {{#if: {{{1}}} | {{lc:{{{1}}} | "parameter missing"}}
    Parameters are separated by "|" symbols. However, we
    cannot simply split the string on "|" symbols, since these
    also appear inside templates and internal links, e.g.

     {{if:|
      |{{#if:the president|
         |{{#if:|
             [[Category:Hatnote templates|A{{PAGENAME}}]]
          }}
       }}
     }}

    We split parts at the "|" symbols that are not inside any pair
    {{{...}}}, {{...}}, [[...]], {|...|}.
    """

    # Must consider '[' as normal in expansion of Template:EMedicine2:
    # #ifeq: ped|article|[http://emedicine.medscape.com/article/180-overview|[http://www.emedicine.com/ped/topic180.htm#{{#if: |section~}}
    # as part of:
    # {{#ifeq: ped|article|[http://emedicine.medscape.com/article/180-overview|[http://www.emedicine.com/ped/topic180.htm#{{#if: |section~}}}} ped/180{{#if: |~}}]

    # should handle both tpl arg like:
    #    4|{{{{{subst|}}}CURRENTYEAR}}
    # and tpl parameters like:
    #    ||[[Category:People|{{#if:A|A|{{PAGENAME}}}}]]

    sep = '|'
    parameters = []
    cur = 0
    for s, e in findMatchingBraces(paramsList):
        par = paramsList[cur:s].split(sep)
        if par:
            if parameters:
                # portion before | belongs to previous parameter
                parameters[-1] += par[0]
                if len(par) > 1:
                    # rest are new parameters
                    parameters.extend(par[1:])
            else:
                parameters = par
        elif not parameters:
            parameters = ['']  # create first param
        # add span to last previous parameter
        parameters[-1] += paramsList[s:e]
        cur = e
    # leftover
    par = paramsList[cur:].split(sep)
    if par:
        if parameters:
            # portion before | belongs to previous parameter
            parameters[-1] += par[0]
            if len(par) > 1:
                # rest are new parameters
                parameters.extend(par[1:])
        else:
            parameters = par

    # logging.debug('splitParts %s %s\nparams: %s', sep, paramsList, text_type(parameters))
    return parameters


def findMatchingBraces(text, ldelim=0):
    """
    :param ldelim: number of braces to match. 0 means match [[]], {{}} and {{{}}}.
    """
    # Parsing is done with respect to pairs of double braces {{..}} delimiting
    # a template, and pairs of triple braces {{{..}}} delimiting a tplarg.
    # If double opening braces are followed by triple closing braces or
    # conversely, this is taken as delimiting a template, with one left-over
    # brace outside it, taken as plain text.  For any pattern of braces this
    # defines a set of templates and tplargs such that any two are either
    # separate or nested (not overlapping).

    # Unmatched double rectangular closing brackets can be in a template or
    # tplarg, but unmatched double rectangular opening brackets cannot.
    # Unmatched double or triple closing braces inside a pair of
    # double rectangular brackets are treated as plain text.
    # Other formulation: in ambiguity between template or tplarg on one hand,
    # and a link on the other hand, the structure with the rightmost opening
    # takes precedence, even if this is the opening of a link without any
    # closing, so not producing an actual link.

    # In the case of more than three opening braces the last three are assumed
    # to belong to a tplarg, unless there is no matching triple of closing
    # braces, in which case the last two opening braces are assumed to
    # belong to a template.

    # We must skip individual { like in:
    #   {{#ifeq: {{padleft:|1|}} | { | | &nbsp;}}
    # We must resolve ambiguities like this:
    #   {{{{ }}}} -> { {{{ }}} }
    #   {{{{{ }}}}} -> {{ {{{ }}} }}
    #   {{#if:{{{{{#if:{{{nominee|}}}|nominee|candidate}}|}}}|...}}
    #   {{{!}} {{!}}}

    # Handle:
    #   {{{{{|safesubst:}}}#Invoke:String|replace|{{{1|{{{{{|safesubst:}}}PAGENAME}}}}}|%s+%([^%(]-%)$||plain=false}}
    # as well as expressions with stray }:
    #   {{{link|{{ucfirst:{{{1}}}}}} interchange}}}

    if ldelim:  # 2-3
        reOpen = re.compile('[{]{%d,}' % ldelim)  # at least ldelim
        reNext = re.compile('[{]{2,}|}{2,}')  # at least 2
    else:
        reOpen = re.compile('{{2,}|\[{2,}')
        reNext = re.compile('{{2,}|}{2,}|\[{2,}|]{2,}')  # at least 2

    cur = 0
    while True:
        m1 = reOpen.search(text, cur)
        if not m1:
            return
        lmatch = m1.end() - m1.start()
        if m1.group()[0] == '{':
            stack = [lmatch]  # stack of opening braces lengths
        else:
            stack = [-lmatch]  # negative means [
        end = m1.end()
        while True:
            m2 = reNext.search(text, end)
            if not m2:
                return  # unbalanced
            end = m2.end()
            brac = m2.group()[0]
            lmatch = m2.end() - m2.start()

            if brac == '{':
                stack.append(lmatch)
            elif brac == '}':
                while stack:
                    openCount = stack.pop()  # opening span
                    if openCount == 0:  # illegal unmatched [[
                        continue
                    if lmatch >= openCount:
                        lmatch -= openCount
                        if lmatch <= 1:  # either close or stray }
                            break
                    else:
                        # put back unmatched
                        stack.append(openCount - lmatch)
                        break
                if not stack:
                    yield m1.start(), end - lmatch
                    cur = end
                    break
                elif len(stack) == 1 and 0 < stack[0] < ldelim:
                    # ambiguous {{{{{ }}} }}
                    #yield m1.start() + stack[0], end
                    cur = end
                    break
            elif brac == '[':  # [[
                stack.append(-lmatch)
            else:  # ]]
                while stack and stack[-1] < 0:  # matching [[
                    openCount = -stack.pop()
                    if lmatch >= openCount:
                        lmatch -= openCount
                        if lmatch <= 1:  # either close or stray ]
                            break
                    else:
                        # put back unmatched (negative)
                        stack.append(lmatch - openCount)
                        break
                if not stack:
                    yield m1.start(), end - lmatch
                    cur = end
                    break
                # unmatched ]] are discarded
                cur = end


def findBalanced(text, openDelim=['[['], closeDelim=[']]']):
    """
    Assuming that text contains a properly balanced expression using
    :param openDelim: as opening delimiters and
    :param closeDelim: as closing delimiters.
    :return: an iterator producing pairs (start, end) of start and end
    positions in text containing a balanced expression.
    """
    openPat = '|'.join([re.escape(x) for x in openDelim])
    # pattern for delimiters expected after each opening delimiter
    afterPat = {o: re.compile(openPat + '|' + c, re.DOTALL) for o, c in zip(openDelim, closeDelim)}
    stack = []
    start = 0
    cur = 0
    # end = len(text)
    startSet = False
    startPat = re.compile(openPat)
    nextPat = startPat
    while True:
        next = nextPat.search(text, cur)
        if not next:
            return
        if not startSet:
            start = next.start()
            startSet = True
        delim = next.group(0)
        if delim in openDelim:
            stack.append(delim)
            nextPat = afterPat[delim]
        else:
            opening = stack.pop()
            # assert opening == openDelim[closeDelim.index(next.group(0))]
            if stack:
                nextPat = afterPat[stack[-1]]
            else:
                yield start, next.end()
                nextPat = startPat
                start = next.end()
                startSet = False
        cur = next.end()


# ----------------------------------------------------------------------
# Modules

# Only minimal support
# FIXME: import Lua modules.

def if_empty(*rest):
    """
    This implements If_empty from English Wikipedia module:

       <title>Module:If empty</title>
       <ns>828</ns>
       <text>local p = {}

    function p.main(frame)
        local args = require('Module:Arguments').getArgs(frame, {wrappers = 'Template:If empty', removeBlanks = false})

        -- For backwards compatibility reasons, the first 8 parameters can be unset instead of being blank,
        -- even though there's really no legitimate use case for this. At some point, this will be removed.
        local lowestNil = math.huge
        for i = 8,1,-1 do
            if args[i] == nil then
                args[i] = ''
                lowestNil = i
            end
        end

        for k,v in ipairs(args) do
            if v ~= '' then
                if lowestNil &lt; k then
                    -- If any uses of this template depend on the behavior above, add them to a tracking category.
                    -- This is a rather fragile, convoluted, hacky way to do it, but it ensures that this module's output won't be modified
                    -- by it.
                    frame:extensionTag('ref', '[[Category:Instances of Template:If_empty missing arguments]]', {group = 'TrackingCategory'})
                    frame:extensionTag('references', '', {group = 'TrackingCategory'})
                end
                return v
            end
        end
    end

    return p   </text>
    """
    for arg in rest:
        if arg:
            return arg
    return ''


# ----------------------------------------------------------------------
# String module emulation
# https://en.wikipedia.org/wiki/Module:String


def functionParams(args, vars):
    """
    Build a dictionary of var/value from :param: args.
    Parameters can be either named or unnamed. In the latter case, their
    name is taken from :param: vars.
    """
    params = {}
    index = 1
    for var in vars:
        value = args.get(var)
        if value is None:
            value = args.get(str(index))  # positional argument
            if value is None:
                value = ''
            else:
                index += 1
        params[var] = value
    return params


def string_sub(args):
    params = functionParams(args, ('s', 'i', 'j'))
    s = params.get('s', '')
    i = int(params.get('i', 1) or 1)   # or handles case of '' value
    j = int(params.get('j', -1) or -1)
    if i > 0: i -= 1             # lua is 1-based
    if j < 0: j += 1
    if j == 0: j = len(s)
    return s[i:j]


def string_sublength(args):
    params = functionParams(args, ('s', 'i', 'len'))
    s = params.get('s', '')
    i = int(params.get('i', 1) or 1) - 1  # lua is 1-based
    len = int(params.get('len', 1) or 1)
    return s[i:i + len]


def string_len(args):
    params = functionParams(args, ('s',))
    s = params.get('s', '')
    return len(s)


def string_find(args):
    params = functionParams(args, ('source', 'target', 'start', 'plain'))
    source = params.get('source', '')
    pattern = params.get('target', '')
    start = int('0' + params.get('start', 1)) - 1  # lua is 1-based
    plain = int('0' + params.get('plain', 1))
    if source == '' or pattern == '':
        return 0
    if plain:
        return source.find(pattern, start) + 1  # lua is 1-based
    else:
        return (re.compile(pattern).search(source, start) or -1) + 1


def string_pos(args):
    params = functionParams(args, ('target', 'pos'))
    target = params.get('target', '')
    pos = int(params.get('pos', 1) or 1)
    if pos > 0:
        pos -= 1  # The first character has an index value of 1
    return target[pos]


def string_replace(args):
    params = functionParams(args, ('source', 'pattern', 'replace', 'count', 'plain'))
    source = params.get('source', '')
    pattern = params.get('pattern', '')
    replace = params.get('replace', '')
    count = int(params.get('count', 0) or 0)
    plain = int(params.get('plain', 1) or 1)
    if plain:
        if count:
            return source.replace(pattern, replace, count)
        else:
            return source.replace(pattern, replace)
    else:
        return re.compile(pattern).sub(replace, source, count)


def string_rep(args):
    params = functionParams(args, ('s',))
    source = params.get('source', '')
    count = int(params.get('count', '1'))
    return source * count


# ----------------------------------------------------------------------
# Module:Roman
# http://en.wikipedia.org/w/index.php?title=Module:Roman
# Modulo:Numero_romano
# https://it.wikipedia.org/wiki/Modulo:Numero_romano


def roman_main(args):
    """Convert first arg to roman numeral if <= 5000 else :return: second arg."""
    num = int(float(args.get('1')))

    # Return a message for numbers too big to be expressed in Roman numerals.
    if 0 > num or num >= 5000:
        return args.get('2', 'N/A')

    def toRoman(n, romanNumeralMap):
        """convert integer to Roman numeral"""
        result = ""
        for integer, numeral in romanNumeralMap:
            while n >= integer:
                result += numeral
                n -= integer
        return result

    # Find the Roman numerals for numbers 4999 or less.
    smallRomans = (
        (1000, "M"),
        (900, "CM"), (500, "D"), (400, "CD"), (100, "C"),
        (90, "XC"), (50, "L"), (40, "XL"), (10, "X"),
        (9, "IX"), (5, "V"), (4, "IV"), (1, "I")
    )
    return toRoman(num, smallRomans)


# ----------------------------------------------------------------------

modules = {
    'convert': {
        'convert': lambda x, u, *rest: x + ' ' + u,  # no conversion
    },
    'If empty': {
        'main': if_empty
    },
    'String': {
        'len': string_len,
        'sub': string_sub,
        'sublength': string_sublength,
        'pos': string_pos,
        'find': string_find,
        'replace': string_replace,
        'rep': string_rep,
    },
    'Roman': {
        'main': roman_main
    },
    'Numero romano': {
        'main': roman_main
    }
}

# ----------------------------------------------------------------------
# variables


class MagicWords(object):
    """
    One copy in each Extractor.

    @see https://doc.wikimedia.org/mediawiki-core/master/php/MagicWord_8php_source.html
    """
    names = [
        '!',
        'currentmonth',
        'currentmonth1',
        'currentmonthname',
        'currentmonthnamegen',
        'currentmonthabbrev',
        'currentday',
        'currentday2',
        'currentdayname',
        'currentyear',
        'currenttime',
        'currenthour',
        'localmonth',
        'localmonth1',
        'localmonthname',
        'localmonthnamegen',
        'localmonthabbrev',
        'localday',
        'localday2',
        'localdayname',
        'localyear',
        'localtime',
        'localhour',
        'numberofarticles',
        'numberoffiles',
        'numberofedits',
        'articlepath',
        'pageid',
        'sitename',
        'server',
        'servername',
        'scriptpath',
        'stylepath',
        'pagename',
        'pagenamee',
        'fullpagename',
        'fullpagenamee',
        'namespace',
        'namespacee',
        'namespacenumber',
        'currentweek',
        'currentdow',
        'localweek',
        'localdow',
        'revisionid',
        'revisionday',
        'revisionday2',
        'revisionmonth',
        'revisionmonth1',
        'revisionyear',
        'revisiontimestamp',
        'revisionuser',
        'revisionsize',
        'subpagename',
        'subpagenamee',
        'talkspace',
        'talkspacee',
        'subjectspace',
        'subjectspacee',
        'talkpagename',
        'talkpagenamee',
        'subjectpagename',
        'subjectpagenamee',
        'numberofusers',
        'numberofactiveusers',
        'numberofpages',
        'currentversion',
        'rootpagename',
        'rootpagenamee',
        'basepagename',
        'basepagenamee',
        'currenttimestamp',
        'localtimestamp',
        'directionmark',
        'contentlanguage',
        'numberofadmins',
        'cascadingsources',
    ]

    def __init__(self):
        self.values = {'!': '|'}

    def __getitem__(self, name):
        return self.values.get(name)

    def __setitem__(self, name, value):
        self.values[name] = value

    switches = (
        '__NOTOC__',
        '__FORCETOC__',
        '__TOC__',
        '__TOC__',
        '__NEWSECTIONLINK__',
        '__NONEWSECTIONLINK__',
        '__NOGALLERY__',
        '__HIDDENCAT__',
        '__NOCONTENTCONVERT__',
        '__NOCC__',
        '__NOTITLECONVERT__',
        '__NOTC__',
        '__START__',
        '__END__',
        '__INDEX__',
        '__NOINDEX__',
        '__STATICREDIRECT__',
        '__DISAMBIG__'
    )


magicWordsRE = re.compile('|'.join(MagicWords.switches))

# ----------------------------------------------------------------------
# parser functions utilities


def ucfirst(string):
    """:return: a string with just its first character uppercase
    We can't use title() since it converts all words.
    """
    if string:
        return string[0].upper() + string[1:]
    else:
        return ''


def lcfirst(string):
    """:return: a string with its first character lowercase"""
    if string:
        if len(string) > 1:
            return string[0].lower() + string[1:]
        else:
            return string.lower()
    else:
        return ''


def fullyQualifiedTemplateTitle(templateTitle):
    """
    Determine the namespace of the page being included through the template
    mechanism
    """
    if templateTitle.startswith(':'):
        # Leading colon by itself implies main namespace, so strip this colon
        return ucfirst(templateTitle[1:])
    else:
        m = re.match('([^:]*)(:.*)', templateTitle)
        if m:
            # colon found but not in the first position - check if it
            # designates a known namespace
            prefix = normalizeNamespace(m.group(1))
            if prefix in options.knownNamespaces:
                return prefix + ucfirst(m.group(2))
    # The title of the page being included is NOT in the main namespace and
    # lacks any other explicit designation of the namespace - therefore, it
    # is resolved to the Template namespace (that's the default for the
    # template inclusion mechanism).

    # This is a defense against pages whose title only contains UTF-8 chars
    # that are reduced to an empty string. Right now I can think of one such
    # case - <C2><A0> which represents the non-breaking space.
    # In this particular case, this page is a redirect to [[Non-breaking
    # space]], but having in the system a redirect page with an empty title
    # causes numerous problems, so we'll live happier without it.
    if templateTitle:
        return options.templatePrefix + ucfirst(templateTitle)
    else:
        return ''  # caller may log as error


def normalizeNamespace(ns):
    return ucfirst(ns)


# ----------------------------------------------------------------------
# Parser functions
# see http://www.mediawiki.org/wiki/Help:Extension:ParserFunctions
# https://github.com/Wikia/app/blob/dev/extensions/ParserFunctions/ParserFunctions_body.php


class Infix:
    """Infix operators.
    The calling sequence for the infix is:
      x |op| y
    """

    def __init__(self, function):
        self.function = function

    def __ror__(self, other):
        return Infix(lambda x, self=self, other=other: self.function(other, x))

    def __or__(self, other):
        return self.function(other)

    def __rlshift__(self, other):
        return Infix(lambda x, self=self, other=other: self.function(other, x))

    def __rshift__(self, other):
        return self.function(other)

    def __call__(self, value1, value2):
        return self.function(value1, value2)


ROUND = Infix(lambda x, y: round(x, y))

from math import floor, ceil, pi, e, trunc, exp, log as ln, sin, cos, tan, asin, acos, atan


def sharp_expr(extr, expr):
    """Tries converting a lua expr into a Python expr."""
    try:
        expr = extr.expand(expr)
        expr = re.sub('(?<![!<>])=', '==', expr)  # negative lookbehind
        expr = re.sub('mod', '%', expr)           # no \b here
        expr = re.sub(r'\bdiv\b', '/', expr)
        expr = re.sub(r'\bround\b', '|ROUND|', expr)
        return text_type(eval(expr))
    except:
        return '<span class="error">%s</span>' % expr


def sharp_if(extr, testValue, valueIfTrue, valueIfFalse=None, *args):
    # In theory, we should evaluate the first argument here,
    # but it was evaluated while evaluating part[0] in expandTemplate().
    if testValue.strip():
        # The {{#if:}} function is an if-then-else construct.
        # The applied condition is: "The condition string is non-empty".
        valueIfTrue = extr.expand(valueIfTrue.strip())  # eval
        if valueIfTrue:
            return valueIfTrue
    elif valueIfFalse:
        return extr.expand(valueIfFalse.strip())  # eval
    return ""


def sharp_ifeq(extr, lvalue, rvalue, valueIfTrue, valueIfFalse=None, *args):
    rvalue = rvalue.strip()
    if rvalue:
        # lvalue is always evaluated
        if lvalue.strip() == rvalue:
            # The {{#ifeq:}} function is an if-then-else construct. The
            # applied condition is "is rvalue equal to lvalue". Note that this
            # does only string comparison while MediaWiki implementation also
            # supports numerical comparisons.
            if valueIfTrue:
                return extr.expand(valueIfTrue.strip())
        else:
            if valueIfFalse:
                return extr.expand(valueIfFalse.strip())
    return ""


def sharp_iferror(extr, test, then='', Else=None, *args):
    if re.match('<(?:strong|span|p|div)\s(?:[^\s>]*\s+)*?class="(?:[^"\s>]*\s+)*?error(?:\s[^">]*)?"', test):
        return extr.expand(then.strip())
    elif Else is None:
        return test.strip()
    else:
        return extr.expand(Else.strip())


def sharp_switch(extr, primary, *params):
    # FIXME: we don't support numeric expressions in primary

    # {{#switch: comparison string
    #  | case1 = result1
    #  | case2
    #  | case4 = result2
    #  | 1 | case5 = result3
    #  | #default = result4
    # }}

    primary = primary.strip()
    found = False  # for fall through cases
    default = None
    rvalue = None
    lvalue = ''
    for param in params:
        # handle cases like:
        #  #default = [http://www.perseus.tufts.edu/hopper/text?doc=Perseus...]
        pair = param.split('=', 1)
        lvalue = extr.expand(pair[0].strip())
        rvalue = None
        if len(pair) > 1:
            # got "="
            rvalue = extr.expand(pair[1].strip())
            # check for any of multiple values pipe separated
            if found or primary in [v.strip() for v in lvalue.split('|')]:
                # Found a match, return now
                return rvalue
            elif lvalue == '#default':
                default = rvalue
                rvalue = None  # avoid defaulting to last case
        elif lvalue == primary:
            # If the value matches, set a flag and continue
            found = True
    # Default case
    # Check if the last item had no = sign, thus specifying the default case
    if rvalue is not None:
        return lvalue
    elif default is not None:
        return default
    return ''


# Extension Scribunto: https://www.mediawiki.org/wiki/Extension:Scribunto
def sharp_invoke(module, function, args):
    functions = modules.get(module)
    if functions:
        funct = functions.get(function)
        if funct:
            return text_type(funct(args))
    return ''


parserFunctions = {
    '#expr': sharp_expr,
    '#if': sharp_if,
    '#ifeq': sharp_ifeq,
    '#iferror': sharp_iferror,
    '#ifexpr': lambda *args: '',  # not supported
    '#ifexist': lambda extr, title, ifex, ifnex: extr.expand(ifnex),  # assuming title is not present
    '#rel2abs': lambda *args: '',  # not supported
    '#switch': sharp_switch,
    '#language': lambda *args: '',  # not supported
    '#time': lambda *args: '',      # not supported
    '#timel': lambda *args: '',     # not supported
    '#titleparts': lambda *args: '',  # not supported
    # This function is used in some pages to construct links
    # http://meta.wikimedia.org/wiki/Help:URL
    'urlencode': lambda extr, string, *rest: quote(string.encode('utf-8')),
    'lc': lambda extr, string, *rest: string.lower() if string else '',
    'lcfirst': lambda extr, string, *rest: lcfirst(string),
    'uc': lambda extr, string, *rest: string.upper() if string else '',
    'ucfirst': lambda extr, string, *rest: ucfirst(string),
    'int': lambda extr, string, *rest: text_type(int(string)),
}


def callParserFunction(functionName, args, extractor):
    """
    Parser functions have similar syntax as templates, except that
    the first argument is everything after the first colon.
    :return: the result of the invocation, None in case of failure.
    :param: args not yet expanded (see branching functions).
    https://www.mediawiki.org/wiki/Help:Extension:ParserFunctions
    """
    try:
        # https://it.wikipedia.org/wiki/Template:Str_endswith has #Invoke
        functionName = functionName.lower()
        if functionName == '#invoke':
            module, fun = args[0].strip(), args[1].strip()
            logging.debug('%*s#invoke %s %s %s', extractor.frame.depth, '', module, fun, args[2:])
            # special handling of frame
            if len(args) == 2:
                # find parameters in frame whose title is the one of the original
                # template invocation
                templateTitle = fullyQualifiedTemplateTitle(module)
                if not templateTitle:
                    logging.warn("Template with empty title")
                params = None
                frame = extractor.frame
                while frame:
                    if frame.title == templateTitle:
                        params = frame.args
                        break
                    frame = frame.prev
            else:
                params = [extractor.transform(p) for p in args[2:]]  # evaluates them
                params = extractor.templateParams(params)
            ret = sharp_invoke(module, fun, params)
            logging.debug('%*s<#invoke %s %s %s', extractor.frame.depth, '', module, fun, ret)
            return ret
        if functionName in parserFunctions:
            # branching functions use the extractor to selectively evaluate args
            return parserFunctions[functionName](extractor, *args)
    except:
        return ""  # FIXME: fix errors
    return ""


# ----------------------------------------------------------------------
# Expand using WikiMedia API
# import json


# def expand(text):
#     """Expand templates invoking MediaWiki API"""
#     text = urllib.quote(text.encode('utf-8'))
#     base = urlbase[:urlbase.rfind('/')]
#     url = base + "/w/api.php?action=expandtemplates&format=json&text=" + text
#     exp = json.loads(urllib.urlopen(url))
#     return exp['expandtemplates']['*']


# ----------------------------------------------------------------------
# Extract Template definition

reNoinclude = re.compile(r'<noinclude>(?:.*?)</noinclude>', re.DOTALL)
reIncludeonly = re.compile(r'<includeonly>|</includeonly>', re.DOTALL)


def define_template(title, page):
    """
    Adds a template defined in the :param page:.
    @see https://en.wikipedia.org/wiki/Help:Template#Noinclude.2C_includeonly.2C_and_onlyinclude
    """
    # title = normalizeTitle(title)

    # sanity check (empty template, e.g. Template:Crude Oil Prices))
    if not page:
        return

    # check for redirects
    m = re.match('#REDIRECT.*?\[\[([^\]]*)]]', page[0], re.IGNORECASE)
    if m:
        options.redirects[title] = m.group(1)  # normalizeTitle(m.group(1))
        return

    text = unescape(''.join(page))

    # We're storing template text for future inclusion, therefore,
    # remove all <noinclude> text and keep all <includeonly> text
    # (but eliminate <includeonly> tags per se).
    # However, if <onlyinclude> ... </onlyinclude> parts are present,
    # then only keep them and discard the rest of the template body.
    # This is because using <onlyinclude> on a text fragment is
    # equivalent to enclosing it in <includeonly> tags **AND**
    # enclosing all the rest of the template body in <noinclude> tags.

    # remove comments
    text = comment.sub('', text)

    # eliminate <noinclude> fragments
    text = reNoinclude.sub('', text)
    # eliminate unterminated <noinclude> elements
    text = re.sub(r'<noinclude\s*>.*$', '', text, flags=re.DOTALL)
    text = re.sub(r'<noinclude/>', '', text)

    onlyincludeAccumulator = ''
    for m in re.finditer('<onlyinclude>(.*?)</onlyinclude>', text, re.DOTALL):
        onlyincludeAccumulator += m.group(1)
    if onlyincludeAccumulator:
        text = onlyincludeAccumulator
    else:
        text = reIncludeonly.sub('', text)

    if text:
        if title in options.templates:
            logging.warn('Redefining: %s', title)
        options.templates[title] = text


# ----------------------------------------------------------------------


def dropNested(text, openDelim, closeDelim):
    """
    A matching function for nested expressions, e.g. namespaces and tables.
    """
    openRE = re.compile(openDelim, re.IGNORECASE)
    closeRE = re.compile(closeDelim, re.IGNORECASE)
    # partition text in separate blocks { } { }
    spans = []                  # pairs (s, e) for each partition
    nest = 0                    # nesting level
    start = openRE.search(text, 0)
    if not start:
        return text
    end = closeRE.search(text, start.end())
    next = start
    while end:
        next = openRE.search(text, next.end())
        if not next:            # termination
            while nest:         # close all pending
                nest -= 1
                end0 = closeRE.search(text, end.end())
                if end0:
                    end = end0
                else:
                    break
            spans.append((start.start(), end.end()))
            break
        while end.end() < next.start():
            # { } {
            if nest:
                nest -= 1
                # try closing more
                last = end.end()
                end = closeRE.search(text, end.end())
                if not end:     # unbalanced
                    if spans:
                        span = (spans[0][0], last)
                    else:
                        span = (start.start(), last)
                    spans = [span]
                    break
            else:
                spans.append((start.start(), end.end()))
                # advance start, find next close
                start = next
                end = closeRE.search(text, next.end())
                break           # { }
        if next != start:
            # { { }
            nest += 1
    # collect text outside partitions
    return dropSpans(spans, text)


def dropSpans(spans, text):
    """
    Drop from text the blocks identified in :param spans:, possibly nested.
    """
    spans.sort()
    res = ''
    offset = 0
    for s, e in spans:
        if offset <= s:         # handle nesting
            if offset < s:
                res += text[offset:s]
            offset = e
    res += text[offset:]
    return res


# ----------------------------------------------------------------------
# ----------------------------------------------------------------------
# WikiLinks

# May be nested [[File:..|..[[..]]..|..]], [[Category:...]], etc.
# Also: [[Help:IPA for Catalan|[andora]]]


def replaceInternalLinks(text):
    """
    Replaces internal links of the form:
    [[title |...|label]]trail

    with title concatenated with trail, when present, e.g. 's' for plural.

    See https://www.mediawiki.org/wiki/Help:Links#Internal_links
    """
    # call this after removal of external links, so we need not worry about
    # triple closing ]]].
    cur = 0
    res = ''
    for s, e in findBalanced(text):
        m = tailRE.match(text, e)
        if m:
            trail = m.group(0)
            end = m.end()
        else:
            trail = ''
            end = e
        inner = text[s + 2:e - 2]
        # find first |
        pipe = inner.find('|')
        if pipe < 0:
            title = inner
            label = title
        else:
            title = inner[:pipe].rstrip()
            # find last |
            curp = pipe + 1
            for s1, e1 in findBalanced(inner):
                last = inner.rfind('|', curp, s1)
                if last >= 0:
                    pipe = last  # advance
                curp = e1
            label = inner[pipe + 1:].strip()
        res += text[cur:s] + makeInternalLink(title, label) + trail
        cur = end
    return res + text[cur:]
# the official version is a method in class Parser, similar to this:
# def replaceInternalLinks2(text):
#     global wgExtraInterlanguageLinkPrefixes

#     # the % is needed to support urlencoded titles as well
#     tc = Title::legalChars() + '#%'
#     # Match a link having the form [[namespace:link|alternate]]trail
#     e1 = re.compile("([%s]+)(?:\\|(.+?))?]](.*)" % tc, re.S | re.D)
#     # Match cases where there is no "]]", which might still be images
#     e1_img = re.compile("([%s]+)\\|(.*)" % tc, re.S | re.D)

#     holders = LinkHolderArray(self)

#     # split the entire text string on occurrences of [[
#     iterBrackets = re.compile('[[').finditer(text)

#     m = iterBrackets.next()
#     # get the first element (all text up to first [[)
#     s = text[:m.start()]
#     cur = m.end()

#     line = s

#     useLinkPrefixExtension = self.getTargetLanguage().linkPrefixExtension()
#     e2 = None
#     if useLinkPrefixExtension:
#         # Match the end of a line for a word that is not followed by whitespace,
#         # e.g. in the case of "The Arab al[[Razi]]",  "al" will be matched
#         global wgContLang
#         charset = wgContLang.linkPrefixCharset()
#         e2 = re.compile("((?>.*[^charset]|))(.+)", re.S | re.D | re.U)

#     if self.mTitle is None:
#         raise MWException(__METHOD__ + ": \self.mTitle is null\n")

#     nottalk = not self.mTitle.isTalkPage()

#     if useLinkPrefixExtension:
#         m = e2.match(s)
#         if m:
#             first_prefix = m.group(2)
#         else:
#             first_prefix = false
#     else:
#         prefix = ''

#     useSubpages = self.areSubpagesAllowed()

#     for m in iterBrackets:
#         line = text[cur:m.start()]
#         cur = m.end()

#         # TODO: Check for excessive memory usage

#         if useLinkPrefixExtension:
#             m = e2.match(s)
#             if m:
#                 prefix = m.group(2)
#                 s = m.group(1)
#             else:
#                 prefix = ''
#             # first link
#             if first_prefix:
#                 prefix = first_prefix
#                 first_prefix = False

#         might_be_img = False

#         m = e1.match(line)
#         if m: # page with normal label or alt
#             label = m.group(2)
#             # If we get a ] at the beginning of m.group(3) that means we have a link that is something like:
#             # [[Image:Foo.jpg|[http://example.com desc]]] <- having three ] in a row fucks up,
#             # the real problem is with the e1 regex
#             # See bug 1300.
#             #
#             # Still some problems for cases where the ] is meant to be outside punctuation,
#             # and no image is in sight. See bug 2095.
#             #
#             if label and m.group(3)[0] == ']' and '[' in label:
#                 label += ']' # so that replaceExternalLinks(label) works later
#                 m.group(3) = m.group(3)[1:]
#             # fix up urlencoded title texts
#             if '%' in m.group(1):
#                 # Should anchors '#' also be rejected?
#                 m.group(1) = str_replace(array('<', '>'), array('&lt', '&gt'), rawurldecode(m.group(1)))
#             trail = m.group(3)
#         else:
#             m = e1_img.match(line)
#             if m:
#                 # Invalid, but might be an image with a link in its caption
#                 might_be_img = true
#                 label = m.group(2)
#                 if '%' in m.group(1):
#                     m.group(1) = rawurldecode(m.group(1))
#                 trail = ""
#             else:     # Invalid form; output directly
#                 s += prefix + '[[' + line
#                 continue

#         origLink = m.group(1)

#         # Dont allow internal links to pages containing
#         # PROTO: where PROTO is a valid URL protocol these
#         # should be external links.
#         if (preg_match('/^(?i:' + self.mUrlProtocols + ')/', origLink)) {
#             s += prefix + '[[' + line
#             continue
#         }

#         # Make subpage if necessary
#         if useSubpages:
#             link = self.maybeDoSubpageLink(origLink, label)
#         else:
#             link = origLink

#         noforce = origLink[0] != ':'
#         if not noforce:
#             # Strip off leading ':'
#             link = link[1:]

#         nt = Title::newFromText(self.mStripState.unstripNoWiki(link))
#         if nt is None:
#             s += prefix + '[[' + line
#             continue

#         ns = nt.getNamespace()
#         iw = nt.getInterwiki()

#         if might_be_img {    # if this is actually an invalid link
#             if (ns == NS_FILE and noforce) { # but might be an image
#                 found = False
#                 while True:
#                     # look at the next 'line' to see if we can close it there
#                     next_line = iterBrackets.next()
#                     if not next_line:
#                         break
#                     m = explode(']]', next_line, 3)
#                     if m.lastindex == 3:
#                         # the first ]] closes the inner link, the second the image
#                         found = True
#                         label += "[[%s]]%s" % (m.group(0), m.group(1))
#                         trail = m.group(2)
#                         break
#                     elif m.lastindex == 2:
#                         # if there is exactly one ]] that is fine, we will keep looking
#                         label += "[[{m[0]}]]{m.group(1)}"
#                     else:
#                         # if next_line is invalid too, we need look no further
#                         label += '[[' + next_line
#                         break
#                 if not found:
#                     # we couldnt find the end of this imageLink, so output it raw
#                     # but dont ignore what might be perfectly normal links in the text we ve examined
#                     holders.merge(self.replaceInternalLinks2(label))
#                     s += "{prefix}[[%s|%s" % (link, text)
#                     # note: no trail, because without an end, there *is* no trail
#                     continue
#             } else: # it is not an image, so output it raw
#                 s += "{prefix}[[%s|%s" % (link, text)
#                 # note: no trail, because without an end, there *is* no trail
#                 continue
#         }

#         wasblank = (text == '')
#         if wasblank:
#             text = link
#         else:
#             # Bug 4598 madness.  Handle the quotes only if they come from the alternate part
#             # [[Lista d''e paise d''o munno]] . <a href="...">Lista d''e paise d''o munno</a>
#             # [[Criticism of Harry Potter|Criticism of ''Harry Potter'']]
#             #    . <a href="Criticism of Harry Potter">Criticism of <i>Harry Potter</i></a>
#             text = self.doQuotes(text)

#         # Link not escaped by : , create the various objects
#         if noforce and not nt.wasLocalInterwiki():
#             # Interwikis
#             if iw and mOptions.getInterwikiMagic() and nottalk and (
#                     Language::fetchLanguageName(iw, None, 'mw') or
#                     in_array(iw, wgExtraInterlanguageLinkPrefixes)):
#                 # Bug 24502: filter duplicates
#                 if iw not in mLangLinkLanguages:
#                     self.mLangLinkLanguages[iw] = True
#                     self.mOutput.addLanguageLink(nt.getFullText())

#                 s = rstrip(s + prefix)
#                 s += strip(trail, "\n") == '' ? '': prefix + trail
#                 continue

#             if ns == NS_FILE:
#                 if not wfIsBadImage(nt.getDBkey(), self.mTitle):
#                     if wasblank:
#                         # if no parameters were passed, text
#                         # becomes something like "File:Foo.png",
#                         # which we dont want to pass on to the
#                         # image generator
#                         text = ''
#                     else:
#                         # recursively parse links inside the image caption
#                         # actually, this will parse them in any other parameters, too,
#                         # but it might be hard to fix that, and it doesnt matter ATM
#                         text = self.replaceExternalLinks(text)
#                         holders.merge(self.replaceInternalLinks2(text))
#                     # cloak any absolute URLs inside the image markup, so replaceExternalLinks() wont touch them
#                     s += prefix + self.armorLinks(
#                         self.makeImage(nt, text, holders)) + trail
#                 else:
#                     s += prefix + trail
#                 continue

#             if ns == NS_CATEGORY:
#                 s = rstrip(s + "\n") # bug 87

#                 if wasblank:
#                     sortkey = self.getDefaultSort()
#                 else:
#                     sortkey = text
#                 sortkey = Sanitizer::decodeCharReferences(sortkey)
#                 sortkey = str_replace("\n", '', sortkey)
#                 sortkey = self.getConverterLanguage().convertCategoryKey(sortkey)
#                 self.mOutput.addCategory(nt.getDBkey(), sortkey)

#                 s += strip(prefix + trail, "\n") == '' ? '' : prefix + trail

#                 continue
#             }
#         }

#         # Self-link checking. For some languages, variants of the title are checked in
#         # LinkHolderArray::doVariants() to allow batching the existence checks necessary
#         # for linking to a different variant.
#         if ns != NS_SPECIAL and nt.equals(self.mTitle) and !nt.hasFragment():
#             s += prefix + Linker::makeSelfLinkObj(nt, text, '', trail)
#             continue

#         # NS_MEDIA is a pseudo-namespace for linking directly to a file
#         # @todo FIXME: Should do batch file existence checks, see comment below
#         if ns == NS_MEDIA:
#             # Give extensions a chance to select the file revision for us
#             options = []
#             descQuery = False
#             Hooks::run('BeforeParserFetchFileAndTitle',
#                        [this, nt, &options, &descQuery])
#             # Fetch and register the file (file title may be different via hooks)
#             file, nt = self.fetchFileAndTitle(nt, options)
#             # Cloak with NOPARSE to avoid replacement in replaceExternalLinks
#             s += prefix + self.armorLinks(
#                 Linker::makeMediaLinkFile(nt, file, text)) + trail
#             continue

#         # Some titles, such as valid special pages or files in foreign repos, should
#         # be shown as bluelinks even though they are not included in the page table
#         #
#         # @todo FIXME: isAlwaysKnown() can be expensive for file links; we should really do
#         # batch file existence checks for NS_FILE and NS_MEDIA
#         if iw == '' and nt.isAlwaysKnown():
#             self.mOutput.addLink(nt)
#             s += self.makeKnownLinkHolder(nt, text, array(), trail, prefix)
#         else:
#             # Links will be added to the output link list after checking
#             s += holders.makeHolder(nt, text, array(), trail, prefix)
#     }
#     return holders


def makeInternalLink(title, label):
    colon = title.find(':')
    if colon > 0 and title[:colon] not in options.acceptedNamespaces:
        return ''
    if colon == 0:
        # drop also :File:
        colon2 = title.find(':', colon + 1)
        if colon2 > 1 and title[colon + 1:colon2] not in options.acceptedNamespaces:
            return ''
    if options.keepLinks:
        return '<a href="%s">%s</a>' % (quote(title.encode('utf-8')), label)
    else:
        return label
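# Illustrative sketch (not part of the original script): how
# makeInternalLink() filters titles by namespace. With the default
# options.acceptedNamespaces, a prefixed title is dropped entirely:
#
#   makeInternalLink('Category:Physics', 'Physics')  ->  ''
#   makeInternalLink('Paris', 'the capital')         ->  'the capital'
#
# With --links the second call returns an <a href="Paris"> anchor instead.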
# ----------------------------------------------------------------------
# External links

# from: https://doc.wikimedia.org/mediawiki-core/master/php/DefaultSettings_8php_source.html

wgUrlProtocols = [
    'bitcoin:', 'ftp://', 'ftps://', 'geo:', 'git://', 'gopher://', 'http://',
    'https://', 'irc://', 'ircs://', 'magnet:', 'mailto:', 'mms://', 'news:',
    'nntp://', 'redis://', 'sftp://', 'sip:', 'sips:', 'sms:', 'ssh://',
    'svn://', 'tel:', 'telnet://', 'urn:', 'worldwind://', 'xmpp:', '//'
]

# from: https://doc.wikimedia.org/mediawiki-core/master/php/Parser_8php_source.html

# Constants needed for external link processing
# Everything except bracket, space, or control characters
# \p{Zs} is unicode 'separator, space' category. It covers the space 0x20
# as well as U+3000 IDEOGRAPHIC SPACE for bug 19052
EXT_LINK_URL_CLASS = r'[^][<>"\x00-\x20\x7F\s]'
ANCHOR_CLASS = r'[^][\x00-\x08\x0a-\x1F]'
ExtLinkBracketedRegex = re.compile(
    '\[(((?i)' + '|'.join(wgUrlProtocols) + ')' + EXT_LINK_URL_CLASS + r'+)' +
    r'\s*((?:' + ANCHOR_CLASS + r'|\[\[' + ANCHOR_CLASS + r'+\]\])' + r'*?)\]',
    re.S | re.U)
# A simpler alternative:
# ExtLinkBracketedRegex = re.compile(r'\[(.*?)\](?!])')

EXT_IMAGE_REGEX = re.compile(
    r"""^(http://|https://)([^][<>"\x00-\x20\x7F\s]+)
    /([A-Za-z0-9_.,~%\-+&;#*?!=()@\x80-\xFF]+)\.((?i)gif|png|jpg|jpeg)$""",
    re.X | re.S | re.U)


def replaceExternalLinks(text):
    """
    https://www.mediawiki.org/wiki/Help:Links#External_links
    [URL anchor text]
    """
    s = ''
    cur = 0
    for m in ExtLinkBracketedRegex.finditer(text):
        s += text[cur:m.start()]
        cur = m.end()

        url = m.group(1)
        label = m.group(3)

        # # The characters '<' and '>' (which were escaped by
        # # removeHTMLtags()) should not be included in
        # # URLs, per RFC 2396.
        # m2 = re.search('&(lt|gt);', url)
        # if m2:
        #     link = url[m2.end():] + ' ' + link
        #     url = url[0:m2.end()]

        # If the link text is an image URL, replace it with an <img> tag
        # This happened by accident in the original parser, but some people used it extensively
        m = EXT_IMAGE_REGEX.match(label)
        if m:
            label = makeExternalImage(label)

        # Use the encoded URL
        # This means that users can paste URLs directly into the text
        # Funny characters like ö aren't valid in URLs anyway
        # This was changed in August 2004
        s += makeExternalLink(url, label)  # + trail

    return s + text[cur:]


def makeExternalLink(url, anchor):
    """Function applied to wikiLinks"""
    if options.keepLinks:
        return '<a href="%s">%s</a>' % (quote(url.encode('utf-8')), anchor)
    else:
        return anchor


def makeExternalImage(url, alt=''):
    if options.keepLinks:
        return '<img src="%s" alt="%s">' % (url, alt)
    else:
        return alt
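# Illustrative sketch (not part of the original script): typical rewrites
# performed by replaceExternalLinks() with the default options (links
# not kept):
#
#   [http://example.com Example site]  ->  Example site
#   [https://example.com]              ->  (empty: bare URL, no label)
#
# With --links the URL survives as an <a href="..."> anchor, and a label
# that is itself an image URL becomes an <img> tag via makeExternalImage().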
# ----------------------------------------------------------------------

# match tail after wikilink
tailRE = re.compile('\w+')

syntaxhighlight = re.compile('&lt;syntaxhighlight .*?&gt;(.*?)&lt;/syntaxhighlight&gt;', re.DOTALL)

# skip level 1, it is page name level
section = re.compile(r'(==+)\s*(.*?)\s*\1')

listOpen = {'*': '<ul>', '#': '<ol>', ';': '<dl>', ':': '<dl>'}
listClose = {'*': '</ul>', '#': '</ol>', ';': '</dl>', ':': '</dl>'}
listItem = {'*': '<li>%s</li>', '#': '<li>%s</li>', ';': '<dt>%s</dt>',
            ':': '<dd>%s</dd>'}


def compact(text):
    """Deal with headers, lists, empty sections, residuals of tables.
    :param text: convert to HTML.
    """

    page = []             # list of paragraph
    headers = {}          # Headers for unfilled sections
    emptySection = False  # empty sections are discarded
    listLevel = []        # nesting of lists
    listCount = []        # count of each list (it should always have the same length as listLevel)

    for line in text.split('\n'):
        if not line:            # collapse empty lines
            # if there is an opening list, close it if we see an empty line
            if len(listLevel):
                page.append(line)
                if options.toHTML:
                    for c in reversed(listLevel):
                        page.append(listClose[c])
                listLevel = []
                listCount = []
                emptySection = False
            elif page and page[-1]:
                page.append('')
            continue
        # Handle section titles
        m = section.match(line)
        if m:
            title = m.group(2)
            lev = len(m.group(1))  # header level
            if options.toHTML:
                page.append("<h%d>%s</h%d>" % (lev, title, lev))
            if title and title[-1] not in '!?':
                title += '.'    # terminate sentence.
            headers[lev] = title
            # drop previous headers
            for i in list(headers.keys()):
                if i > lev:
                    del headers[i]
            emptySection = True
            listLevel = []
            listCount = []
            continue
        # Handle page title
        elif line.startswith('++'):
            title = line[2:-2]
            if title:
                if title[-1] not in '!?':
                    title += '.'
                page.append(title)
        # handle indents
        elif line[0] == ':':
            # page.append(line.lstrip(':*#;'))
            continue
        # handle lists
        elif line[0] in '*#;:':
            i = 0
            # c: current level char
            # n: next level char
            for c, n in zip_longest(listLevel, line, fillvalue=''):
                if not n or n not in '*#;:':  # shorter or different
                    if c:
                        if options.toHTML:
                            page.append(listClose[c])
                        listLevel = listLevel[:-1]
                        listCount = listCount[:-1]
                        continue
                    else:
                        break
                # n != ''
                if c != n and (not c or (c not in ';:' and n not in ';:')):
                    if c:
                        # close level
                        if options.toHTML:
                            page.append(listClose[c])
                        listLevel = listLevel[:-1]
                        listCount = listCount[:-1]
                    listLevel += n
                    listCount.append(0)
                    if options.toHTML:
                        page.append(listOpen[n])
                i += 1
            n = line[i - 1]  # last list char
            line = line[i:].strip()
            if line:  # FIXME: n is '"'
                if options.keepLists:
                    if options.keepSections:
                        # emit open sections
                        items = sorted(headers.items())
                        for _, v in items:
                            page.append("Section::::" + v)
                        headers.clear()
                    # use item count for #-lines
                    listCount[i - 1] += 1
                    bullet = 'BULLET::::%d. ' % listCount[i - 1] if n == '#' else 'BULLET::::- '
                    page.append('{0:{1}s}'.format(bullet, len(listLevel)) + line)
                elif options.toHTML:
                    if n not in listItem: n = '*'
                    page.append(listItem[n] % line)
        elif len(listLevel):
            if options.toHTML:
                for c in reversed(listLevel):
                    page.append(listClose[c])
            listLevel = []
            listCount = []
            page.append(line)
        # Drop residuals of lists
        elif line[0] in '{|' or line[-1] == '}':
            continue
        # Drop irrelevant lines
        elif (line[0] == '(' and line[-1] == ')') or line.strip('.-') == '':
            continue
        elif len(headers):
            if options.keepSections:
                items = sorted(headers.items())
                for i, v in items:
                    page.append("Section::::" + v)
            headers.clear()
            page.append(line)  # first line
            emptySection = False
        elif not emptySection:
            # Drop preformatted
            if line[0] != ' ':  # dangerous
                page.append(line)

    return page


def handle_unicode(entity):
    numeric_code = int(entity[2:-1])
    if numeric_code >= 0x10000: return ''
    return chr(numeric_code)
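# Illustrative sketch (not part of the original script): what compact()
# emits for headers and list items when run with -s (keepSections) and
# --lists (keepLists):
#
#   == History ==     ->  Section::::History.   (only once content follows)
#   * first item      ->  BULLET::::- first item
#   # numbered item   ->  BULLET::::1. numbered item
#
# A header with no content line under it is dropped, since headers are
# buffered in 'headers' and flushed only when a text line appears.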
# ------------------------------------------------------------------------------
# Output


class NextFile(object):
    """
    Synchronous generation of next available file name.
    """

    filesPerDir = 100

    def __init__(self, path_name):
        self.path_name = path_name
        self.dir_index = -1
        self.file_index = -1

    def __next__(self):
        self.file_index = (self.file_index + 1) % NextFile.filesPerDir
        if self.file_index == 0:
            self.dir_index += 1
        dirname = self._dirname()
        if not os.path.isdir(dirname):
            os.makedirs(dirname)
        return self._filepath()

    next = __next__

    def _dirname(self):
        char1 = self.dir_index % 26
        char2 = self.dir_index // 26 % 26
        return os.path.join(self.path_name, '%c%c' % (ord('A') + char2, ord('A') + char1))

    def _filepath(self):
        return '%s/wiki_%02d' % (self._dirname(), self.file_index)


class OutputSplitter(object):
    """
    File-like object, that splits output to multiple files of a given max size.
    """

    def __init__(self, nextFile, max_file_size=0, compress=True):
        """
        :param nextFile: a NextFile object from which to obtain filenames
            to use.
        :param max_file_size: the maximum size of each file.
        :param compress: whether to write data with bzip compression.
        """
        self.nextFile = nextFile
        self.compress = compress
        self.max_file_size = max_file_size
        self.file = self.open(next(self.nextFile))

    def reserve(self, size):
        if self.file.tell() + size > self.max_file_size:
            self.close()
            self.file = self.open(next(self.nextFile))

    def write(self, data):
        self.reserve(len(data))
        self.file.write(data)

    def close(self):
        self.file.close()

    def open(self, filename):
        if self.compress:
            return bz2.BZ2File(filename + '.bz2', 'w')
        else:
            return open(filename, 'wb')
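# Illustrative sketch (not part of the original script): the layout
# NextFile produces under an output directory 'text'. Each directory
# holds filesPerDir (100) files before the name advances:
#
#   text/AA/wiki_00 ... text/AA/wiki_99,
#   text/AB/wiki_00 ... and so on; after text/AZ comes text/BA.
#
# dir_index is encoded base-26 as two letters (char2 = high digit,
# char1 = low digit), giving 26*26 directories of 100 files each.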
# ----------------------------------------------------------------------
# READER

tagRE = re.compile(r'(.*?)<(/?\w+)[^>]*?>(?:([^<]*)(<.*?>)?)?')
#                    1     2               3      4
keyRE = re.compile(r'key="(\d*)"')
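# Illustrative sketch (not part of the original script): what tagRE
# captures on a typical dump line:
#
#   '  <title>Anarchism</title>'
#       group(2) == 'title'        (tag name)
#       group(3) == 'Anarchism'    (tag content)
#       group(4) == '</title>'     (closing tag, so m.lastindex == 4)
#
# A self-closing '<text xml:space="preserve" />' leaves m.lastindex == 3
# with empty content, which pages_from() below detects and skips.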
catRE = re.compile(r'\[\[Category:([^\|]+).*\]\].*')  # capture the category name [[Category:Category name|Sortkey]]


def load_templates(file, output_file=None):
    """
    Load templates from :param file:.
    :param output_file: file where to save templates and modules.
    """
    options.templatePrefix = options.templateNamespace + ':'
    options.modulePrefix = options.moduleNamespace + ':'

    if output_file:
        output = codecs.open(output_file, 'wb', 'utf-8')
    for page_count, page_data in enumerate(pages_from(file)):
        id, revid, title, ns, catSet, page = page_data
        if not output_file and (not options.templateNamespace or
                                not options.moduleNamespace):  # do not know it yet
            # reconstruct templateNamespace and moduleNamespace from the first title
            if ns in templateKeys:
                colon = title.find(':')
                if colon > 1:
                    if ns == '10':
                        options.templateNamespace = title[:colon]
                        options.templatePrefix = title[:colon + 1]
                    elif ns == '828':
                        options.moduleNamespace = title[:colon]
                        options.modulePrefix = title[:colon + 1]
        if ns in templateKeys:
            text = ''.join(page)
            define_template(title, text)
            # save templates and modules to file
            if output_file:
                output.write('<page>\n')
                output.write('   <title>%s</title>\n' % title)
                output.write('   <ns>%s</ns>\n' % ns)
                output.write('   <id>%s</id>\n' % id)
                output.write('   <text>')
                for line in page:
                    output.write(line)
                output.write('   </text>\n')
                output.write('</page>\n')
        if page_count and page_count % 100000 == 0:
            logging.info("Preprocessed %d pages", page_count)
    if output_file:
        output.close()
        logging.info("Saved %d templates to '%s'", len(options.templates), output_file)


def pages_from(input):
    """
    Scans input extracting pages.
    :return: (id, revid, title, namespace key, page), page is a list of lines.
    """
    # we collect individual lines, since str.join() is significantly faster
    # than concatenation
    page = []
    id = None
    ns = '0'
    last_id = None
    revid = None
    inText = False
    redirect = False
    title = None
    for line in input:
        if not isinstance(line, text_type): line = line.decode('utf-8')
        if '<' not in line:  # faster than doing re.search()
            if inText:
                page.append(line)
                # extract categories
                if line.lstrip().startswith('[[Category:'):
                    mCat = catRE.search(line)
                    if mCat:
                        catSet.add(mCat.group(1))
            continue
        m = tagRE.search(line)
        if not m:
            continue
        tag = m.group(2)
        if tag == 'page':
            page = []
            catSet = set()
            redirect = False
        elif tag == 'id' and not id:
            id = m.group(3)
        elif tag == 'id' and not revid:
            revid = m.group(3)
        elif tag == 'title':
            title = m.group(3)
        elif tag == 'ns':
            ns = m.group(3)
        elif tag == 'redirect':
            redirect = True
        elif tag == 'text':
            if m.lastindex == 3 and line[m.start(3)-2] == '/':  # self closing
                # <text xml:space="preserve" />
                continue
            inText = True
            line = line[m.start(3):m.end(3)]
            page.append(line)
            if m.lastindex == 4:  # open-close
                inText = False
        elif tag == '/text':
            if m.group(1):
                page.append(m.group(1))
            inText = False
        elif inText:
            page.append(line)
        elif tag == '/page':
            if id != last_id and not redirect:
                yield (id, revid, title, ns, catSet, page)
                last_id = id
                ns = '0'
            id = None
            revid = None
            title = None
            page = []


def process_dump(input_file, template_file, out_file, file_size, file_compress,
                 process_count):
    """
    :param input_file: name of the wikipedia dump file; '-' to read from stdin
    :param template_file: optional file with template definitions.
    :param out_file: directory where to store extracted data, or '-' for stdout
    :param file_size: max size of each extracted file, or None for no max (one file)
    :param file_compress: whether to compress files with bzip.
    :param process_count: number of extraction processes to spawn.
    """

    if input_file == '-':
        input = sys.stdin
    else:
        input = fileinput.FileInput(input_file, openhook=fileinput.hook_compressed)

    # collect siteinfo
    for line in input:
        # When an input file is .bz2 or .gz, line can be bytes even in Python 3.
        if not isinstance(line, text_type): line = line.decode('utf-8')
        m = tagRE.search(line)
        if not m:
            continue
        tag = m.group(2)
        if tag == 'base':
            # discover urlbase from the xml dump file
            # /mediawiki/siteinfo/base
            base = m.group(3)
            options.urlbase = base[:base.rfind("/")]
        elif tag == 'namespace':
            mk = keyRE.search(line)
            if mk:
                nsid = ''.join(mk.groups())
            else:
                nsid = ''
            options.knownNamespaces[m.group(3)] = nsid
            if re.search('key="10"', line):
                options.templateNamespace = m.group(3)
                options.templatePrefix = options.templateNamespace + ':'
            elif re.search('key="828"', line):
                options.moduleNamespace = m.group(3)
                options.modulePrefix = options.moduleNamespace + ':'
        elif tag == '/siteinfo':
            break

    if options.expand_templates:
        # preprocess
        template_load_start = default_timer()
        if template_file:
            if os.path.exists(template_file):
                logging.info("Loading template definitions from: %s", template_file)
                # can't use with here:
                file = fileinput.FileInput(template_file,
                                           openhook=fileinput.hook_compressed)
                load_templates(file)
                file.close()
            else:
                if input_file == '-':
                    # can't scan then reset stdin; must error w/ suggestion to specify template_file
                    raise ValueError("to use templates with stdin dump, must supply explicit template-file")
                logging.info("Preprocessing '%s' to collect template definitions: this may take some time.", input_file)
                load_templates(input, template_file)
                input.close()
                input = fileinput.FileInput(input_file, openhook=fileinput.hook_compressed)
        template_load_elapsed = default_timer() - template_load_start
        logging.info("Loaded %d templates in %.1fs", len(options.templates), template_load_elapsed)

    # process pages
    logging.info("Starting page extraction from %s.", input_file)
    extract_start = default_timer()

    # Parallel Map/Reduce:
    # - pages to be processed are dispatched to workers
    # - a reduce process collects the results, sorts them and prints them.

    process_count = max(1, process_count)
    maxsize = 10 * process_count
    # output queue
    output_queue = Queue(maxsize=maxsize)

    if out_file == '-':
        out_file = None

    worker_count = process_count

    # load balancing
    max_spool_length = 10000
    spool_length = Value('i', 0, lock=False)

    # reduce job that sorts and prints output
    reduce = Process(target=reduce_process,
                     args=(options, output_queue, spool_length,
                           out_file, file_size, file_compress))
    reduce.start()

    # initialize jobs queue
    jobs_queue = Queue(maxsize=maxsize)

    # start worker processes
    logging.info("Using %d extract processes.", worker_count)
    workers = []
    for i in range(worker_count):
        extractor = Process(target=extract_process,
                            args=(options, i, jobs_queue, output_queue))
        extractor.daemon = True  # only live while parent process lives
        extractor.start()
        workers.append(extractor)

    # Mapper process
    page_num = 0
    for page_data in pages_from(input):
        id, revid, title, ns, catSet, page = page_data
        if keepPage(ns, catSet, page):
            # slow down
            delay = 0
            if spool_length.value > max_spool_length:
                # reduce to 10%
                while spool_length.value > max_spool_length / 10:
                    time.sleep(10)
                    delay += 10
            if delay:
                logging.info('Delay %ds', delay)
            job = (id, revid, title, page, page_num)
            jobs_queue.put(job)  # goes to any available extract_process
            page_num += 1
        page = None             # free memory

    input.close()

    # signal termination
    for _ in workers:
        jobs_queue.put(None)
    # wait for workers to terminate
    for w in workers:
        w.join()

    # signal end of work to reduce process
    output_queue.put(None)
    # wait for it to finish
    reduce.join()

    extract_duration = default_timer() - extract_start
    extract_rate = page_num / extract_duration
    logging.info("Finished %d-process extraction of %d articles in %.1fs (%.1f art/s)",
                 process_count, page_num, extract_duration, extract_rate)
    logging.info("total pages: %d, total article pages: %d, total used article pages: %d",
                 g_page_total, g_page_articl_total, g_page_articl_used_total)
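# Illustrative sketch (not part of the original script): the shape of the
# MediaWiki dump XML scanned line by line by pages_from() and by the
# siteinfo loop in process_dump() (only the tags actually inspected):
#
#   <page>
#     <title>Anarchism</title>
#     <ns>0</ns>
#     <id>12</id>
#     <revision>
#       <id>1002</id>                               <- taken as revid
#       <text xml:space="preserve">wiki markup</text>
#     </revision>
#   </page>
#
# Pages carrying a <redirect .../> tag are skipped by pages_from().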
# ----------------------------------------------------------------------
# Multiprocess support


def extract_process(opts, i, jobs_queue, output_queue):
    """
    Pull tuples of raw page content, do CPU/regex-heavy fixup, push finished text
    :param i: process id.
    :param jobs_queue: where to get jobs.
    :param output_queue: where to queue extracted text for output.
    """
    global options
    options = opts

    createLogger(options.quiet, options.debug, options.log_file)

    out = StringIO()                 # memory buffer

    while True:
        job = jobs_queue.get()  # job is (id, revid, title, page, page_num)
        if job:
            id, revid, title, page, page_num = job
            try:
                e = Extractor(*job[:4])  # (id, revid, title, page)
                page = None              # free memory
                e.extract(out)
                text = out.getvalue()
            except:
                text = ''
                logging.exception('Processing page: %s %s', id, title)

            output_queue.put((page_num, text))
            out.truncate(0)
            out.seek(0)
        else:
            logging.debug('Quit extractor')
            break
    out.close()


report_period = 10000           # progress report period


def reduce_process(opts, output_queue, spool_length,
                   out_file=None, file_size=0, file_compress=True):
    """Pull finished article text, write series of files (or stdout)
    :param opts: global parameters.
    :param output_queue: text to be output.
    :param spool_length: spool length.
    :param out_file: filename where to print.
    :param file_size: max file size.
    :param file_compress: whether to compress output.
    """
    global options
    options = opts

    createLogger(options.quiet, options.debug, options.log_file)

    if out_file:
        nextFile = NextFile(out_file)
        output = OutputSplitter(nextFile, file_size, file_compress)
    else:
        output = sys.stdout if PY2 else sys.stdout.buffer
        if file_compress:
            logging.warn("writing to stdout, so no output compression (use an external tool)")

    interval_start = default_timer()
    # FIXME: use a heap
    spool = {}        # collected pages
    next_page = 0     # sequence numbering of page
    while True:
        if next_page in spool:
            output.write(spool.pop(next_page).encode('utf-8'))
            next_page += 1
            # tell mapper our load:
            spool_length.value = len(spool)
            # progress report
            if next_page % report_period == 0:
                interval_rate = report_period / (default_timer() - interval_start)
                logging.info("Extracted %d articles (%.1f art/s)",
                             next_page, interval_rate)
                interval_start = default_timer()
        else:
            # mapper puts None to signal finish
            pair = output_queue.get()
            if not pair:
                break
            page_num, text = pair
            spool[page_num] = text
            # tell mapper our load:
            spool_length.value = len(spool)
            # FIXME: if an extractor dies, process stalls; the other processes
            # continue to produce pairs, filling up memory.
            if len(spool) > 200:
                logging.debug('Collected %d, waiting: %d, %d', len(spool),
                              next_page, next_page == page_num)
    if output != sys.stdout:
        output.close()
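# Illustrative sketch (not part of the original script): the process
# topology wired up by process_dump():
#
#   pages_from(dump) --jobs_queue--> N x extract_process --output_queue--> reduce_process
#
# reduce_process keeps out-of-order results in 'spool' and writes a page
# only when its page_num equals next_page, so the output preserves the
# order of the dump despite parallel extraction; spool_length feeds back
# to the mapper, which sleeps when the spool grows too large.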
# ----------------------------------------------------------------------

# Minimum size of output files
minFileSize = 200 * 1024


def main():
    parser = argparse.ArgumentParser(prog=os.path.basename(sys.argv[0]),
                                     formatter_class=argparse.RawDescriptionHelpFormatter,
                                     description=__doc__)
    parser.add_argument("input",
                        help="XML wiki dump file")
    groupO = parser.add_argument_group('Output')
    groupO.add_argument("-o", "--output", default="text",
                        help="directory for extracted files (or '-' for dumping to stdout)")
    groupO.add_argument("-b", "--bytes", default="1M",
                        help="maximum bytes per output file (default %(default)s)",
                        metavar="n[KMG]")
    groupO.add_argument("-c", "--compress", action="store_true",
                        help="compress output files using bzip")
    groupO.add_argument("--json", action="store_true",
                        help="write output in json format instead of the default one")

    groupP = parser.add_argument_group('Processing')
    groupP.add_argument("--html", action="store_true",
                        help="produce HTML output, subsumes --links")
    groupP.add_argument("-l", "--links", action="store_true",
                        help="preserve links")
    groupP.add_argument("-s", "--sections", action="store_true",
                        help="preserve sections")
    groupP.add_argument("--lists", action="store_true",
                        help="preserve lists")
    groupP.add_argument("-ns", "--namespaces", default="", metavar="ns1,ns2",
                        help="accepted namespaces in links")
    groupP.add_argument("--templates",
                        help="use or create file containing templates")
    groupP.add_argument("--no_templates", action="store_false",
                        help="Do not expand templates")
    groupP.add_argument("-r", "--revision", action="store_true", default=options.print_revision,
                        help="Include the document revision id (default=%(default)s)")
    groupP.add_argument("--min_text_length", type=int, default=options.min_text_length,
                        help="Minimum expanded text length required to write document (default=%(default)s)")
    groupP.add_argument("--filter_disambig_pages", action="store_true", default=options.filter_disambig_pages,
                        help="Remove pages from output that contain disambiguation markup (default=%(default)s)")
    groupP.add_argument("-it", "--ignored_tags", default="", metavar="abbr,b,big",
                        help="comma separated list of tags that will be dropped, keeping their content")
    groupP.add_argument("-de", "--discard_elements", default="", metavar="gallery,timeline,noinclude",
                        help="comma separated list of elements that will be removed from the article text")
    groupP.add_argument("--keep_tables", action="store_true", default=options.keep_tables,
                        help="Preserve tables in the output article text (default=%(default)s)")
    default_process_count = max(1, cpu_count() - 1)
    parser.add_argument("--processes", type=int, default=default_process_count,
                        help="Number of processes to use (default %(default)s)")

    groupS = parser.add_argument_group('Special')
    groupS.add_argument("-q", "--quiet", action="store_true",
                        help="suppress reporting progress info")
    groupS.add_argument("--debug", action="store_true",
                        help="print debug info")
    groupS.add_argument("-a", "--article", action="store_true",
                        help="analyze a file containing a single article (debug option)")
    groupS.add_argument("--log_file",
                        help="path to save the log info")
    groupS.add_argument("-v", "--version", action="version",
                        version='%(prog)s ' + version,
                        help="print program version")
    groupP.add_argument("--filter_category",
                        help="specify a file listing the categories to include or exclude, one category per line:"
                             " lines starting with '#' are comments (ignored); lines starting with '^' are excluded;"
                             " excluding takes priority over including")

    args = parser.parse_args()

    options.keepLinks = args.links
    options.keepSections = args.sections
    options.keepLists = args.lists
    options.toHTML = args.html
    options.write_json = args.json
    options.print_revision = args.revision
    options.min_text_length = args.min_text_length
    if args.html:
        options.keepLinks = True

    options.expand_templates = args.no_templates
    options.filter_disambig_pages = args.filter_disambig_pages
    options.keep_tables = args.keep_tables

    try:
        power = 'kmg'.find(args.bytes[-1].lower()) + 1
        file_size = int(args.bytes[:-1]) * 1024 ** power
        if file_size < minFileSize:
            raise ValueError()
    except ValueError:
        logging.error('Insufficient or invalid size: %s', args.bytes)
        return

    if args.namespaces:
        options.acceptedNamespaces = set(args.namespaces.split(','))

    # ignoredTags and discardElements have default values already supplied; if passed in, the defaults are overwritten
    if args.ignored_tags:
        ignoredTags = set(args.ignored_tags.split(','))
    else:
        ignoredTags = [
            'abbr', 'b', 'big', 'blockquote', 'center', 'cite', 'em',
            'font', 'h1', 'h2', 'h3', 'h4', 'hiero', 'i', 'kbd',
            'p', 'plaintext', 's', 'span', 'strike', 'strong',
            'tt', 'u', 'var'
        ]

    # 'a' tag is handled separately
    for tag in ignoredTags:
        ignoreTag(tag)

    if args.discard_elements:
        options.discardElements = set(args.discard_elements.split(','))

    FORMAT = '%(levelname)s: %(message)s'
    logging.basicConfig(format=FORMAT)

    options.quiet = args.quiet
    options.debug = args.debug
    options.log_file = args.log_file

    createLogger(options.quiet, options.debug, options.log_file)

    input_file = args.input

    if not options.keepLinks:
        ignoreTag('a')

    # sharing cache of parser templates is too slow:
    # manager = Manager()
    # templateCache = manager.dict()

    if args.article:
        if args.templates:
            if os.path.exists(args.templates):
                with open(args.templates) as file:
                    load_templates(file)

        file = fileinput.FileInput(input_file, openhook=fileinput.hook_compressed)
        for page_data in pages_from(file):
            id, revid, title, ns, catSet, page = page_data
            Extractor(id, revid, title, page).extract(sys.stdout)
        file.close()
        return

    output_path = args.output
    if output_path != '-' and not os.path.isdir(output_path):
        try:
            os.makedirs(output_path)
        except:
            logging.error('Could not create: %s', output_path)
            return

    filter_category = args.filter_category
    if filter_category is not None and len(filter_category) > 0:
        with open(filter_category) as f:
            error_cnt = 0
            for line in f.readlines():
                try:
                    line = str(line.strip())
                    if line.startswith('#') or len(line) == 0:
                        continue
                    elif line.startswith('^'):
                        options.filter_category_exclude.add(line.lstrip('^'))
                    else:
                        options.filter_category_include.add(line)
                except Exception as e:
                    error_cnt += 1
                    print(u"Category not in utf8, ignored. error cnt %d:\t%s" % (error_cnt, e))
                    print(line)
        logging.info("Excluding categories:")
        logging.info(str(options.filter_category_exclude))
        logging.info("Including categories:")
        logging.info(str(len(options.filter_category_include)))

    process_dump(input_file, args.templates, output_path, file_size,
                 args.compress, args.processes)


def createLogger(quiet, debug, log_file):
    logger = logging.getLogger()
    if not quiet:
        logger.setLevel(logging.INFO)
    if debug:
        logger.setLevel(logging.DEBUG)
    if log_file:
        fileHandler = logging.FileHandler(log_file)
        logger.addHandler(fileHandler)


if __name__ == '__main__':
    main()
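# Illustrative note (not part of the original script): how main() turns
# the -b/--bytes option into a byte count. 'kmg'.find() maps the unit
# suffix to an exponent of 1024:
#
#   -b 500K  ->  500 * 1024**1 =    512,000 bytes
#   -b 1M    ->    1 * 1024**2 =  1,048,576 bytes  (the default)
#   -b 2G    ->    2 * 1024**3 bytes
#
# Sizes below minFileSize (200 KiB) are rejected as invalid.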
