Columns: code (string, 4–4.48k chars) · docstring (string, 1–6.45k chars) · _id (string, 24 chars)
def duplicate(pos, dupLen, s):
    start = s[:pos]
    dup = s[pos: pos + dupLen]
    end = s[pos + dupLen:]
    ret = start + (2 * dup) + end
    return ret
Duplicates the substring of s of length dupLen starting at index pos. For example, s = '000111', dupLen = 2, pos = 2 gives '00010111'.
625941bd711fe17d82542269
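A quick self-contained check of the row above, reproducing the function and the docstring's own example:

```python
def duplicate(pos, dupLen, s):
    # Double the substring of length dupLen starting at index pos.
    start = s[:pos]
    dup = s[pos: pos + dupLen]
    end = s[pos + dupLen:]
    return start + (2 * dup) + end

# The docstring's example: pos=2, dupLen=2 on "000111"
print(duplicate(2, 2, "000111"))  # → 00010111
```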
def create_dataset(self, in_stream):
    assert XLSX_IMPORT
    from io import BytesIO
    xlsx_book = openpyxl.load_workbook(BytesIO(in_stream), read_only=True)
    dataset = tablib.Dataset()
    sheet = xlsx_book.active
    rows = sheet.rows
    dataset.headers = [cell.value for cell in next(rows)]
    for row in rows:
        row_values = [cell.value for cell in row]
        dataset.append(row_values)
    return dataset
Create dataset from first sheet.
625941bde1aae11d1e749bad
def test_table_exists(self, postgres_test_db):
    assert postgres_test_db.table_exists("recipe_step")
Entry test to make sure the Recipe table exists
625941bd99fddb7c1c9de28a
def UploadFiles(upload_pairs, bucket_ref, gs_prefix=None):
    storage_client = storage_api.StorageClient()
    dests = []
    checksum = file_utils.Checksum(algorithm=hashlib.sha256)
    for local_path, _ in upload_pairs:
        checksum.AddFileContents(local_path)
    if gs_prefix is not None:
        gs_prefix = '/'.join([gs_prefix, checksum.HexDigest()])
    else:
        gs_prefix = checksum.HexDigest()
    for local_path, uploaded_path in upload_pairs:
        obj = storage_client.CopyFileToGCS(
            bucket_ref, local_path, '/'.join([gs_prefix, uploaded_path]))
        dests.append('/'.join(['gs:/', obj.bucket, obj.name]))
    return dests
Uploads files at the local paths to a prefixed location in Cloud Storage. The prefix is the given gs_prefix (if any) joined with the SHA-256 checksum of the uploaded files' contents.

Args:
  upload_pairs: [(str, str)]. Pairs of absolute paths to local files to upload and corresponding path in Cloud Storage (that goes after the prefix). For example, ('/path/foo', 'bar') will upload '/path/foo' to '<prefix>/bar' in Cloud Storage.
  bucket_ref: storage_util.BucketReference. Files will be uploaded to this bucket.
  gs_prefix: str. Prefix to the GCS path where files will be uploaded.

Returns:
  [str]. A list of fully qualified gcs paths for the uploaded files, in the same order they were provided.
625941bd31939e2706e4cd66
@then("delete all '{elements}' elements")
def step(context, elements):
    if elements == 'collections':
        first_xpath = ('//*[@ng-repeat="coll in vm.colls.colls" ][%s]'
                       '/*[@ ng-show="collectionsEdit"]')
        second_xpath = ('//*[@class="c-btn'
                        ' modal__action-confirm modal__action"]')
    context.clickActions = ClickActions(context)
    context.clickActions.click_all_and_click_button(first_xpath, second_xpath)
Then delete all '{elements}' elements
625941bd56b00c62f0f1454f
def get_best_next_step(self, point: Point, target: Point, length: int, obstacles: str):
    if point == target:
        return point
    orthogonal = [p for p in get_orthogonally_adjacent(point)
                  if self.is_valid(p, obstacles)]
    paths = [(p, self.get_closest_targets(p, [target], obstacles))
             for p in orthogonal]
    paths = [(first, answers[0]) for first, answers in paths if answers]
    return next(first for (first, (last, l)) in paths
                if l < length and last == target)
Get the best next step from a point towards a target; the remaining path length must be < length.
625941bd377c676e912720a2
def update(self, artifact_id, type_name=None, type_version=None,
           remove_props=None, **kwargs):
    type_name, type_version = self._check_type_params(type_name, type_version)
    url = glare_urls['update_get_delete'].format(version=self.version,
                                                 type_name=type_name,
                                                 type_version=type_version,
                                                 artifact_id=artifact_id)
    hdrs = {'Content-Type': 'application/openstack-images-v2.1-json-patch'}
    artifact_obj = self.get(artifact_id, type_name, type_version)
    changes = []
    if remove_props:
        for prop in remove_props:
            if prop in ArtifactType.generic_properties:
                msg = "Generic properties cannot be removed"
                raise exc.HTTPBadRequest(msg)
            if prop not in kwargs:
                changes.append({'op': 'remove', 'path': '/' + prop})
    for prop in kwargs:
        if prop in artifact_obj.generic_properties:
            op = 'add' if getattr(artifact_obj, prop) is None else 'replace'
        elif prop in artifact_obj.type_specific_properties:
            if artifact_obj.type_specific_properties[prop] is None:
                op = 'add'
            else:
                op = 'replace'
        else:
            msg = ("Property '%s' doesn't exist in type '%s' with version"
                   " '%s'" % (prop, type_name, type_version))
            raise exc.HTTPBadRequest(msg)
        changes.append({'op': op, 'path': '/' + prop, 'value': kwargs[prop]})
    resp, body = self.http_client.patch(url, headers=hdrs, data=changes)
    return ArtifactType(**body)
Update attributes of an artifact.

:param artifact_id: ID of the artifact to modify.
:param remove_props: List of property names to remove.
:param \*\*kwargs: Artifact attribute names and their new values.
625941bd7b180e01f3dc46fc
def buildUrl(self, uri):
    scheme = 'https'
    netloc = '%s:%s' % (self['conn'].host, self['conn'].port)
    return urlunparse([scheme, netloc, uri, '', '', ''])
Prepares the remote URL
625941bd097d151d1a222d54
def __init__(self, emitter, callback):
    self._emitter = emitter
    self.callback = callback
Initialize a new Event Listener class.
625941bd462c4b4f79d1d5c8
def search(self, query, property_map={}):
    if not isinstance(query, str):
        raise TypeError("'query' must be of type 'str'")
    if not isinstance(property_map, dict):
        raise TypeError("'property_map' must be of type 'dict'")
    try:
        search_result = _SireMol.Select(query)(self._sire_object, property_map)
    except Exception as e:
        msg = "Invalid search query: %r" % query
        if _isVerbose():
            raise ValueError(msg) from e
        else:
            raise ValueError(msg) from None
    return _SearchResult(search_result)
Search the residue for atoms and residues.

Parameters
----------
query : str
    The search query.
property_map : dict
    A dictionary that maps system "properties" to their user defined
    values. This allows the user to refer to properties with their own
    naming scheme, e.g. { "charge" : "my-charge" }

Returns
-------
results : [:class:`Atom <BioSimSpace._SireWrappers.Atom>`]
    A list of objects matching the search query.

Examples
--------
Search for all oxygen or hydrogen atoms.

>>> result = residue.search("element oxygen or element hydrogen")

Search for atom index 23.

>>> result = residue.search("atomidx 23")
625941bd1f5feb6acb0c4a4c
def Clear(self):
    return _lldb.SBData_Clear(self)
Clear(SBData self)
625941bdd268445f265b4d66
def test_type_is_enveloped(self):
    pkcs7 = load_pkcs7_data(FILETYPE_PEM, pkcs7Data)
    self.assertFalse(pkcs7.type_is_enveloped())
L{PKCS7Type.type_is_enveloped} returns C{False} if the PKCS7 object is not of the type I{enveloped}.
625941bd97e22403b379ce91
def load_metadata(self, metadata_file):
    with open(metadata_file) as meta:
        header = meta.readline()
        self.metadata["header"] = header.split("\t")[1:5]
        for line in meta.readlines():
            fields = line.rstrip().split("\t")
            self.metadata["rows"][fields[0]] = fields[1:5]
Loads the metadata from the given tab-separated file.

:param metadata_file: path to the metadata file
:return:
625941bd50485f2cf553cc91
def get_battery_sensor_current(self, identifier: int) -> float:
    return 2.0
Get the current of a battery sensor.
625941bd3346ee7daa2b2c62
def _visualize(self, unnorm_image, class_ids, scores, bounding_boxes):
    ax = utils.viz.plot_bbox(unnorm_image, bounding_boxes[0], scores[0],
                             class_ids[0], class_names=self._network.classes)
    fig = plt.gcf()
    fig.set_size_inches(14, 14)
    plt.show()
Since the transformed_image is in NCHW layout and the values are normalized, this method slices and transposes to give CHW as required by matplotlib, and scales (-2, +2) to (0, 255) linearly.
625941bdfff4ab517eb2f332
def test_detect_one_dimensional():
    assert not Partition(3, 3, 2).is_one_dimensional()
    assert not Partition(3, 3, 3).is_one_dimensional()
    assert not Partition(11, 3, 3).is_one_dimensional()
    assert not Partition(15, 7, 7, 1, 1, 1, 1).is_one_dimensional()
Test for false positives in 1D detection.
625941bd85dfad0860c3ad52
def utcnow():
    return datetime.utcnow().replace(tzinfo=pytz.utc)
Return the current UTC date and time as a datetime object with proper tzinfo.
625941bd9b70327d1c4e0ccc
def test_plan_summary_view(request):
    plans = (TestPlan.objects.filter(enabled=True)
             .filter(end_date__gte=datetime.now())
             .order_by('start_date'))
    return _generate_test_views(request, plans)
Called when viewing active test plans
625941bd56b00c62f0f14550
def testTokensUnicode(self):
    line = u"Comment ça va?"
    e = Entry(line)
    self.assertEqual([u"Comment", u"ça", u"va", u"?"], e.getTokens(line))
Get tokens with accented characters
625941bdd6c5a10208143f40
def __unicode__(self):
    return self.day_of_week
Return a more human-readable representation
625941bd4f6381625f114935
def export_post(request, style, format=-1):
    try:
        payload = request.get_json(force=True)
    except Exception:
        payload = dict(request.form)
    if not payload:
        return {'error': 'no information received'}, 400
    if 'bibcode' not in payload:
        return {'error': 'no bibcode found in payload (parameter name is `bibcode`)'}, 400
    if 'sort' in payload:
        sort = read_value_list_or_not(payload, 'sort')
    else:
        sort = 'date desc, bibcode desc'
    bibcodes = payload['bibcode']
    if format == -1:
        current_app.logger.info(
            'received request with bibcodes={bibcodes} to export in {style} style '
            'using sort order={sort}'.format(
                bibcodes=','.join(bibcodes), style=style, sort=sort))
    else:
        current_app.logger.info(
            'received request with bibcodes={bibcodes} to export in {style} style '
            'with output format {format} using sort order={sort}'.format(
                bibcodes=','.join(bibcodes), style=style, format=format, sort=sort))
    if current_app.config['EXPORT_SERVICE_TEST_BIBCODE_GET'] == bibcodes:
        return solrdata.data, 200
    return get_solr_data(bibcodes=bibcodes, fields=default_solr_fields(), sort=sort,
                         encode_style=adsFormatter().native_encoding(format)), 200
:param request:
:param style:
:param format:
:return:
625941bd925a0f43d2549d6c
def get_emails(self):
    headers, data = self._requester.requestJsonAndCheck("GET", "/user/emails")
    itemdata = namedtuple("EmailData", data[0].keys())
    return [itemdata._make(item.values()) for item in data]
:calls: `GET /user/emails <http://docs.github.com/en/rest/reference/users#emails>`_
:rtype: list of namedtuples with members email, primary, verified and visibility
625941bd090684286d50ebda
def associamedia():
    def on_accept_suppoform(form):
        session.flash = "new media added"
        session.media_id = form.vars.id
    if session.movie_id:
        db.formato.film.default = session.movie_id
    else:
        raise HTTP(404, 'No movie specified')
    if session.media_id:
        db.formato.supporto.default = session.media_id
        return dict(form=crud.create(db.formato, next='film/%s' % session.movie_id))
    else:
        supporti = db(db.supporto.id > 0).select()
        if supporti:
            db.formato.supporto.default = supporti.last().id
        suppoform = crud.create(db.supporto, next='associamedia',
                                fields=['tipo', 'collocazione'],
                                onaccept=on_accept_suppoform)
        my_extra_element = A('Nuova collocazione',
                             _href=URL('collocazione', 'aggiungi', user_signature=True))
        suppoform[0].insert(-1, my_extra_element)
        return dict(form=crud.create(db.formato, next='film/%s' % session.movie_id),
                    suppoform=suppoform)
Add new relation between movie_id and media_id on formato table
625941bd1d351010ab855a15
def distance(self, ts1, ts2):
    sax = self.transform([ts1, ts2])
    return self.distance_sax(sax[0], sax[1])
Compute distance between SAX representations as defined in [1]_.

Parameters
----------
ts1 : array-like
    A time series
ts2 : array-like
    Another time series

Returns
-------
float
    SAX distance

References
----------
.. [1] J. Lin, E. Keogh, L. Wei, et al. Experiencing SAX: a novel symbolic
   representation of time series. Data Mining and Knowledge Discovery, 2007.
   vol. 15(107)
625941bd3cc13d1c6d3c7274
@cli.command()
@click.argument('name')
@click.option('--unsafe-import-token',
              help='Private key to import to wallet (unsafe, unless shell '
                   'history is deleted afterwards)')
def addtoken(name, unsafe_import_token):
    stm = shared_dpay_instance()
    if stm.rpc is not None:
        stm.rpc.rpcconnect()
    if not unlock_wallet(stm):
        return
    if not unsafe_import_token:
        unsafe_import_token = click.prompt("Enter private token",
                                           confirmation_prompt=False,
                                           hide_input=True)
    stm.wallet.addToken(name, unsafe_import_token)
    set_shared_dpay_instance(stm)
Add a key to the wallet. When no [OPTION] is given, a password prompt for unlocking the wallet and a prompt for entering the private key are shown.
625941bd4527f215b584c353
def test_nullable_object(self):
    var_a = Variable("A")
    var_b = Variable("B")
    ter_a = Terminal("a")
    ter_b = Terminal("b")
    start = Variable("S")
    prod0 = Production(start, [var_a, var_b])
    prod1 = Production(var_a, [ter_a, var_a, var_a])
    prod2 = Production(var_a, [Epsilon()])
    prod3 = Production(var_b, [ter_b, var_b, var_b])
    prod4 = Production(var_b, [Epsilon()])
    cfg = CFG({var_a, var_b, start}, {ter_a, ter_b}, start,
              {prod0, prod1, prod2, prod3, prod4})
    self.assertEqual(cfg.get_nullable_symbols(), {var_a, var_b, start})
Tests the finding of nullable objects
625941bd10dbd63aa1bd2a9f
def kthSmallest(self, matrix, k):
    lo = matrix[0][0]
    hi = matrix[-1][-1]
    while lo <= hi:
        mid = lo + ((hi - lo) >> 1)
        count = _count_smaller_or_equal_numbers(mid, matrix)
        if count < k:
            lo = mid + 1
        else:
            hi = mid - 1
    return lo
:type matrix: List[List[int]]
:type k: int
:rtype: int
625941bdfbf16365ca6f60b6
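The binary search above depends on a helper `_count_smaller_or_equal_numbers` that is not in the row; a minimal sketch of the whole technique (the helper body is an assumption, relying on each matrix row being sorted ascending):

```python
def _count_smaller_or_equal_numbers(value, matrix):
    # Count entries <= value; each row of the matrix is sorted ascending.
    count = 0
    for row in matrix:
        for x in row:
            if x <= value:
                count += 1
            else:
                break  # rest of this row is larger
    return count

def kth_smallest(matrix, k):
    # Binary search over the value range, as in the row above.
    lo, hi = matrix[0][0], matrix[-1][-1]
    while lo <= hi:
        mid = lo + ((hi - lo) >> 1)
        if _count_smaller_or_equal_numbers(mid, matrix) < k:
            lo = mid + 1
        else:
            hi = mid - 1
    return lo

print(kth_smallest([[1, 5, 9], [10, 11, 13], [12, 13, 15]], 8))  # → 13
```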
def get_next_book_id(self):
    self.increment_last_book_id()
    return self.__last_book_id
Function to return the next valid ID for a book in the repository.

:return: integer, next valid ID for a book.
625941bd4c3428357757c222
def GetNumberOfPoints(self):
    return _itkPointBasedSpatialObjectPython.itkPointBasedSpatialObject2_GetNumberOfPoints(self)
GetNumberOfPoints(self) -> unsigned long
625941bd21a7993f00bc7be4
def testCase921(self):
    global configdata
    assert configdata.data["phoneNumber"][0]["type"] == jval["phoneNumber"][0]["type"]
    assert configdata.data["phoneNumber"][0]["number"] == jval["phoneNumber"][0]["number"]
Check 'phoneNumber' loaded from JSON data file into configdata.data["phoneNumber"].
625941bd627d3e7fe0d68d47
def format_config_error(e: ConfigError) -> Iterator[str]:
    yield "Error in configuration"
    if e.path:
        yield " at '%s'" % (".".join(e.path),)
    yield ":\n %s" % (e.msg,)
    e = e.__cause__
    indent = 1
    while e:
        indent += 1
        yield ":\n%s%s" % (" " * indent, str(e))
        e = e.__cause__
Formats a config error neatly

The idea is to format the immediate error, plus the "causes" of those errors, hopefully in a way that makes sense to the user. For example:

    Error in configuration at 'oidc_config.user_mapping_provider.config.display_name_template':
      Failed to parse config for module 'JinjaOidcMappingProvider':
        invalid jinja template: unexpected end of template, expected 'end of print statement'.

Args:
    e: the error to be formatted

Returns:
    An iterator which yields string fragments to be formatted
625941bd21bff66bcd68484d
def reset(self, _c_name):
    _maybe_ins_obj, is_ins = self.__get_probable_instance_or_class(_c_name)
    if is_ins:
        __get_payload = self.__get_payload(_c_name)
        if len(__get_payload):
            self.__set_val_into_class(__get_payload, _maybe_ins_obj.__init__)
        return _maybe_ins_obj
Reset the object.
625941bd4428ac0f6e5ba6ea
def test_device_enrollment_single(self):
    try:
        device_enrollment = DeviceEnrollment(
            enrollment_identity="A-4E:63:2D:AE:14:BC:D1:09:77:21:95:44:ED:34:06:57:1E:03:B1:EF:0E:F2:59:44:71:93:23:22:15:43:23:12")
        device_enrollment.create()
    except ApiErrorResponse as api_error:
        self.assertEqual(api_error.status_code, 409, "This should be a duplicate identity")
Example of enrolling a device in Pelion Device Management.
625941bd01c39578d7e74d34
def get_factor(self, p_gain, gain):
    # Branches ordered by magnitude; ranges are mutually exclusive, so
    # reordering does not change behavior.
    if -1 <= p_gain < 1:
        gain += p_gain
    elif 1 <= p_gain < 2:
        gain += 1 + (p_gain - 1) * 9
    elif 2 <= p_gain < 3:
        gain += 10 + (p_gain - 2) * 90
    elif 3 <= p_gain < 4:
        gain += 100 + (p_gain - 3) * 900
    elif p_gain >= 4:
        gain += 1000 + (p_gain - 4) * 9000
    elif -2 <= p_gain < -1:
        gain += -1 - (abs(p_gain) - 1) * 9
    elif -3 <= p_gain < -2:
        gain += -10 - (abs(p_gain) - 2) * 90
    elif -4 <= p_gain < -3:
        gain += -100 - (abs(p_gain) - 3) * 900
    elif p_gain < -4:
        gain += -1000 - (abs(p_gain) - 4) * 9000
    return gain
Rate of getting utility
625941bd8a43f66fc4b53f61
def delete(self):
    try:
        os.remove(g_prefs.get_principal_path())
    except OSError as error:
        log_print("Error deleting principal cache: " + str(error))
        raise
    self.principal = None
Deletes cache file and removes from memory
625941bd91af0d3eaac9b90e
def createContextFromJson(self, pathtoJsonFiles, showContext=False, printToFile=SAVE_CONTEXT):
    if not (isinstance(pathtoJsonFiles, str) or isinstance(pathtoJsonFiles, list)):
        raise ValueError('createContextFromJson requires either a path for the folder '
                         'of json files or a list with the path of the files to be added')
    if not self.initmeplease():
        warnings.warn('createContextFromJson>> Object not initiated')
        return []
    files_processed = []
    if isinstance(pathtoJsonFiles, str):
        filepaths = self.getFiles(pathtoJsonFiles, {}, ['.json'], False)
        if len(filepaths) > 0:
            filepaths = filepaths.get('.json')
    else:
        filepaths = pathtoJsonFiles
    if filepaths is not None and len(filepaths) > 0:
        for filepath in filepaths:
            filename = ntpath.basename(filepath)
            print(filename)
            with open(filepath) as df:
                data = json.load(df, object_hook=Context.as_context)
            if isinstance(data, list):
                for d in data:
                    if isinstance(d, Anything):
                        print(d)
                        d.addRelatedFiles(filename)
                        self.addItem(d)
                        files_processed.append(filepath)
            elif isinstance(data, Anything):
                print(data)
                data.addRelatedFiles(filename)
                self.addItem(data)
                files_processed.append(filepath)
            if data is None or ((isinstance(data, list) or isinstance(data, dict)) and len(data) == 0):
                print("No Item info found in:", filepath)
        self.cannibalism()
        self.save_context(printToFile)
        if showContext:
            self.showContext()
        files_processed = list(set(files_processed))
        return files_processed
    else:
        warnings.warn('createContextFromJson>> Context not created, there are no files to be analysed')
        return []
This method initialises the context.

:param pathtoJsonFiles: path to the folder of .json files or, instead, a list with the paths of the files
:param showContext: if True, shows the context once the method finishes (default: False)
:param printToFile: file to save the context
:return: The list of files processed (if any)
625941bdb57a9660fec3377a
def mergeSeveralFoldsGeom(data):
    return stats.mstats.gmean(data).tolist()
Creates an ensemble using the geometric mean of the results of several CV folds.

Parameters
----------
data: List
    The data from which the mean is to be found.

Returns
-------
List
    The predictions made by the ensemble for each image.
625941bd7047854f462a1305
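The row relies on `scipy.stats.mstats.gmean` across folds; the same element-wise ensemble can be sketched with the standard library alone (a pure-Python equivalent, assuming gmean's default axis=0 behavior, i.e. averaging across folds per prediction):

```python
import math

def merge_folds_geom(fold_preds):
    # fold_preds: one list of (positive) predictions per CV fold.
    # Geometric mean element-wise across folds, via logs for stability.
    n = len(fold_preds)
    return [math.exp(sum(math.log(p) for p in column) / n)
            for column in zip(*fold_preds)]

print(merge_folds_geom([[1.0, 4.0], [4.0, 16.0]]))  # ≈ [2.0, 8.0]
```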
def _wrap_method(method, wsdl_name, arg_processor, result_processor, usage):
    icontrol_sig = "iControl signature: %s" % _method_string(method)
    if usage:
        usage += "\n\n%s" % icontrol_sig
    else:
        usage = "Wrapper for %s.%s\n\n%s" % (
            wsdl_name, method.method.name, icontrol_sig)

    def wrapped_method(*args, **kwargs):
        log.debug('Executing iControl method: %s.%s(%s, %s)',
                  wsdl_name, method.method.name, args, kwargs)
        args, kwargs = arg_processor.process(args, kwargs)
        try:
            result = method(*args, **kwargs)
        except AttributeError:
            raise ConnectionError('iControl call failed, possibly invalid '
                                  'credentials.')
        except _MethodNotFound as e:
            e.__class__ = MethodNotFound
            raise
        except WebFault as e:
            e.__class__ = ServerError
            raise
        except URLError as e:
            raise ConnectionError('URLError: %s' % str(e))
        except BadStatusLine as e:
            raise ConnectionError('BadStatusLine: %s' % e)
        except SAXParseException as e:
            raise ParseError("Failed to parse the BIGIP's response. This "
                             "was likely caused by a 500 error message.")
        return result_processor.process(result)

    wrapped_method.__doc__ = usage
    wrapped_method.__name__ = str(method.method.name)
    wrapped_method._method = method
    return wrapped_method
This function wraps a suds method and returns a new function which provides argument/result processing.

Each time a method is called, the incoming args will be passed to the specified arg_processor before being passed to the suds method. The return value from the underlying suds method will be passed to the specified result_processor prior to being returned to the caller.

@param method: A suds method (can be obtained via client.service.<method_name>).
@param arg_processor: An instance of L{_ArgProcessor}.
@param result_processor: An instance of L{_ResultProcessor}.
625941bdf548e778e58cd475
def twoSum(self, nums, target):
    i = 0
    twosum_list = []
    while i < len(nums):
        j = 1
        while j <= len(nums) - i - 1:
            if nums[i] + nums[i + j] == target:
                twosum_list.append(i)
                twosum_list.append(i + j)
            j = j + 1
        i = i + 1
    return twosum_list
:type nums: List[int]
:type target: int
:rtype: List[int]
625941bd1b99ca400220a9aa
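The quadratic scan above can be checked directly (self-contained sketch; note it collects every matching index pair, not just the first):

```python
def two_sum(nums, target):
    # O(n^2) scan over all index pairs, as in the row above.
    result = []
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                result.extend([i, j])
    return result

print(two_sum([2, 7, 11, 15], 9))  # → [0, 1]
```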
def __init__(self, x=0, y=0):
    self.x = x
    self.y = y
Initializes the position of a new point. If not specified, the point defaults to the origin.

:param x: x coordinate
:param y: y coordinate
625941bd8e71fb1e9831d6a3
def _rotate(self, angle):
    pass
Turn turtle counterclockwise by specified angle if angle > 0.
625941bd0fa83653e4656eb5
def _register_user(self, form):
    user = form.save()
    if getattr(settings, 'OSCAR_SEND_REGISTRATION_EMAIL', True):
        self.send_registration_email(user)
    user_registered.send_robust(sender=self, user=user)
    try:
        user = authenticate(
            username=user.email,
            password=form.cleaned_data['password1'])
    except User.MultipleObjectsReturned:
        users = User.objects.filter(email=user.email)
        user = users[0]
        for u in users[1:]:
            u.delete()
    auth_login(self.request, user)
    if self.request.session.test_cookie_worked():
        self.request.session.delete_test_cookie()
Register a new user from the data in *form*. If ``OSCAR_SEND_REGISTRATION_EMAIL`` is set to ``True`` a registration email will be sent to the provided email address. A new user account is created and the user is then logged in.
625941bd1f5feb6acb0c4a4d
def test_single_span_stacked_literal(self):
    text = "This is a test!"
    pattern = re.compile(r"(.+?)(test)(!)")
    expand = bre.compile_replace(pattern, r'Test \l\Cstacked\E\3')
    results = expand(pattern.match(text))
    self.assertEqual('Test STACKED!', results)
Test single backslash before a single case reference before a literal.
625941bd63d6d428bbe443e8
def hs_func():
    def hs(x):
        if x > 0:
            return 1
        else:
            return 0
    return np.frompyfunc(hs, 1, 1)
Define a ufunc that calculates the Heaviside step function hs.
625941bd2eb69b55b151c7a5
def codomain(self):
    return self._codomain
Codomain of ``self``: this is a graph.
625941bd45492302aab5e1b9
def __init__(self, type_robot, type_processor, program_file_extension,
             def_program_template):
    self.type_robot = type_robot
    self.type_processor = type_processor
    self.program_file_extension = program_file_extension
    self.program_directory = self._get_program_directory()
    self.program_template_name = self._get_program_name(
        default=mimic_config.Prefs.get('DEFAULT_TEMPLATE_NAME'))
    self.program_output_name = self._get_program_name(
        default=mimic_config.Prefs.get('DEFAULT_OUTPUT_NAME'))
    self.default_program = def_program_template
Initialize generic processor. This function sets PostProcessor types and a few default parameters.

Subclasses should implement the following in their own initialization functions: super(*subclass, self).__init__(*args)

:param type_robot: Type of robot supported by this processor
:param type_processor: Type of this processor
:param program_file_extension: Type of the output file
625941bd7b25080760e39353
def mcAbs(self, absPosition, moveVel=0.5):
    if self.verbose:
        print('mcAbs', absPosition, c_float(absPosition), 'mVel', moveVel)
    if not self.Connected:
        raise Exception('Please connect first! Use initializeHardwareDevice')
    minVel, acc, maxVel = self.getVelocityParameters()
    self.setVel(moveVel)
    self.mAbs(absPosition)
    self.setVel(maxVel)
    if self.verbose:
        print('mcAbs SUCCESS')
    return True
Moves the motor to the absolute position specified, at a controlled velocity.

absPosition  float  Position desired
moveVel      float  Motor velocity, mm/sec
625941bd32920d7e50b280c6
def __init__(self, nanoseconds=0, microseconds=0, milliseconds=0, seconds=0,
             minutes=0, hours=0, days=0, weeks=0, months=0, years=0):
    self.nanoseconds = int(round(nanoseconds +
                                 microseconds * self.MICROSECONDS +
                                 milliseconds * self.MILLISECONDS +
                                 seconds * self.SECONDS +
                                 minutes * self.MINUTES +
                                 hours * self.HOURS +
                                 days * self.DAYS +
                                 weeks * self.WEEKS +
                                 months * self.MONTHS +
                                 years * self.YEARS))
Initialise a new Duration instance. The instance will be made to represent the closest nanosecond to the sum of all of the arguments.

:param nanoseconds: The number of nanoseconds to represent.
:param microseconds: The number of microseconds to represent.
:param milliseconds: The number of milliseconds to represent.
:param seconds: The number of seconds to represent.
:param minutes: The number of minutes to represent.
:param hours: The number of hours to represent.
:param days: The number of days to represent.
:param weeks: The number of weeks to represent.
:param months: The number of months to represent.
:param years: The number of years to represent.
625941bda05bb46b383ec71d
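The constructor above assumes class-level unit constants (MICROSECONDS, SECONDS, …) that are not in the row; a minimal sketch with assumed values (the 30-day month and 365-day year are assumptions):

```python
class Duration:
    # Nanoseconds per unit; MONTHS/YEARS use assumed average lengths.
    MICROSECONDS = 10**3
    MILLISECONDS = 10**6
    SECONDS = 10**9
    MINUTES = 60 * SECONDS
    HOURS = 60 * MINUTES
    DAYS = 24 * HOURS
    WEEKS = 7 * DAYS
    MONTHS = 30 * DAYS   # assumption: 30-day month
    YEARS = 365 * DAYS   # assumption: 365-day year

    def __init__(self, nanoseconds=0, microseconds=0, milliseconds=0,
                 seconds=0, minutes=0, hours=0, days=0, weeks=0,
                 months=0, years=0):
        # Sum all arguments, converted to nanoseconds, rounding to the
        # closest whole nanosecond.
        self.nanoseconds = int(round(
            nanoseconds + microseconds * self.MICROSECONDS +
            milliseconds * self.MILLISECONDS + seconds * self.SECONDS +
            minutes * self.MINUTES + hours * self.HOURS + days * self.DAYS +
            weeks * self.WEEKS + months * self.MONTHS + years * self.YEARS))

print(Duration(seconds=1, milliseconds=500).nanoseconds)  # → 1500000000
```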
def test_tee_sample(self):
    items = iter(range(1000))
    sample = take_sample(items, 50)
    sample1, sample2 = itertools.tee(sample)
It tests that tee and sample behave ok together
625941bdd7e4931a7ee9de16
def __init__(self, sensor, gas_measurement=False, burn_in_time=300,
             hum_baseline=40, hum_weighting=25):
    self.sensor_data = BME680Handler.SensorData()
    self._sensor = sensor
    self._gas_sensor_running = False
    self._hum_baseline = hum_baseline
    self._hum_weighting = hum_weighting
    self._gas_baseline = None
    if gas_measurement:
        threading.Thread(
            target=self._run_gas_sensor,
            kwargs={"burn_in_time": burn_in_time},
            name="BME680Handler_run_gas_sensor",
        ).start()
    self.update(first_read=True)
Initialize the sensor handler.
625941bdfff4ab517eb2f333
def FGGetNodeList(*args): <NEW_LINE> <INDENT> return _FireGrab_swig.FGGetNodeList(*args)
FGGetNodeList(FGNODEINFO pInfo, UINT32 MaxCnt) -> UINT32
625941bd9f2886367277a789
def nearest(row, geom_union, df2, geom1_col='geometry', geom2_col='geometry', ser_column=None): <NEW_LINE> <INDENT> nearest = df2[geom2_col] == nearest_points(row[geom1_col], geom_union)[1] <NEW_LINE> value = df2[nearest][ser_column].get_values()[0] <NEW_LINE> return value
Finds the closest point from the set of points. Parameters ---------- geom_union: shapely.MultiPoint
625941bdcb5e8a47e48b79a7
def __init__(self, ens1, ens2=None, sdObj = None): <NEW_LINE> <INDENT> EnsOperator.__init__(self) <NEW_LINE> self.ens1 = ens1 <NEW_LINE> self.ens1.stackit() <NEW_LINE> if not hasattr(self.ens1, 'listvar'): <NEW_LINE> <INDENT> self.ens1.listvar = list(self.ens1.stack.keys()) <NEW_LINE> <DEDENT> if ens2 is not None: <NEW_LINE> <INDENT> self.ens2 = ens2 <NEW_LINE> self.ens2.stackit() <NEW_LINE> if not hasattr(self.ens2, 'listvar'): <NEW_LINE> <INDENT> self.ens2.listvar = list(self.ens2.stack.keys()) <NEW_LINE> <DEDENT> <DEDENT> if sdObj is not None: <NEW_LINE> <INDENT> self.sdObj = sdObj <NEW_LINE> self.sdObj.load()
Constructor
625941bd60cbc95b062c6443
def TABLE(self): <NEW_LINE> <INDENT> return _trellis.trellis_viterbi_combined_ib_sptr_TABLE(self)
TABLE(self) -> __dummy_3__
625941bd30dc7b7665901862
@candidate.route('/api/candidate/<_id>/<status>', methods=['POST']) <NEW_LINE> @login_required <NEW_LINE> def api_catdidate_type_ready(_id, status): <NEW_LINE> <INDENT> session = Session() <NEW_LINE> session.query(Person).filter_by(id=_id).update({"type_": status}) <NEW_LINE> session.commit() <NEW_LINE> session.close() <NEW_LINE> return 'ok'
Change the employee type (active, rejected, reserve). :param _id: candidate id :param status: ready/reserv/rejected
625941bdadb09d7d5db6c68b
def _unlock(self): <NEW_LINE> <INDENT> pass
Unlock the config DB.
625941bd091ae35668666e5d
def freguesia(self) -> str: <NEW_LINE> <INDENT> return self.random_element(self.freguesias)
:example 'Miranda do Douro'
625941bdf7d966606f6a9efa
def check_contour(points): <NEW_LINE> <INDENT> return points is not None and len(points) != 0 and len(extract_ndarray(points)) != 0
Validation of a set of points @param points: set of points :return: True, False
625941bd097d151d1a222d55
def getInfoFromBytes(self, isRequest, rawBytes): <NEW_LINE> <INDENT> if isRequest: <NEW_LINE> <INDENT> return self.burpHelper.analyzeRequest(rawBytes) <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> return self.burpHelper.analyzeResponse(rawBytes)
Process request or response from raw bytes. Returns IRequestInfo or IResponseInfo respectively. Use getInfo instead if you have access to an IHttpRequestResponse object. It allows you to use all methods like IRequestInfo.getUrl() later.
625941bdad47b63b2c509e7a
def middleNode(self, head): <NEW_LINE> <INDENT> list = [] <NEW_LINE> pointer1 = head <NEW_LINE> while(pointer1 != None): <NEW_LINE> <INDENT> list.append(pointer1) <NEW_LINE> pointer1 = pointer1.next <NEW_LINE> <DEDENT> if len(list)%2 == 0: <NEW_LINE> <INDENT> return list[int(round(len(list)/2))] <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> return list[int(round((len(list)-1)/2))]
:type head: ListNode :rtype: ListNode
625941bdd8ef3951e3243437
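The list-collecting approach in the `middleNode` entry above can be exercised with a minimal standalone sketch; the `ListNode` stub and `build` helper below are hypothetical, defined only for illustration:

```python
class ListNode:
    """Minimal singly linked list node (hypothetical stub for illustration)."""
    def __init__(self, val):
        self.val = val
        self.next = None

def middle_node(head):
    # Collect every node into a list, then index the middle element;
    # for even lengths this returns the second of the two middle nodes,
    # matching the indexing in the entry above.
    nodes = []
    while head is not None:
        nodes.append(head)
        head = head.next
    return nodes[len(nodes) // 2]

def build(values):
    # Helper: build a linked list from a Python sequence.
    head = ListNode(values[0])
    cur = head
    for v in values[1:]:
        cur.next = ListNode(v)
        cur = cur.next
    return head
```

A two-pointer (slow/fast) variant would avoid the O(n) extra list, but the list version mirrors the entry most directly.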
def get(self): <NEW_LINE> <INDENT> response = {} <NEW_LINE> response[API_ENVELOPE] = {} <NEW_LINE> for state in ['mobius-queue', 'mobius-processing']: <NEW_LINE> <INDENT> data = Schedule2.query .join(Role) .join(Location) .join(Organization) .filter(Schedule2.state == state, Organization.active == True) .all() <NEW_LINE> response[API_ENVELOPE][state] = map( lambda schedule: marshal(schedule, tasking_schedule_fields), data) <NEW_LINE> <DEDENT> return response
Return all queued calculations
625941bd6aa9bd52df036c9c
def mkdirs(path): <NEW_LINE> <INDENT> from xbmcvfs import mkdirs as vfsmkdirs <NEW_LINE> log(3, "Recursively create directory '{path}'.", path=path) <NEW_LINE> return vfsmkdirs(path)
Create directory including parents (using xbmcvfs)
625941bd004d5f362079a22f
def watch_file(self, fd, callback): <NEW_LINE> <INDENT> self._loop.add_reader(fd, callback) <NEW_LINE> return fd
Call callback() when fd has some data to read. No parameters are passed to callback. Returns a handle that may be passed to remove_watch_file() fd -- file descriptor to watch for input callback -- function to call when input is available
625941bdf9cc0f698b1404f7
def _check_all_metadata_found(metadata, items): <NEW_LINE> <INDENT> for name in metadata: <NEW_LINE> <INDENT> seen = False <NEW_LINE> for sample in items: <NEW_LINE> <INDENT> if isinstance(name, (tuple, list)): <NEW_LINE> <INDENT> if sample["files"][0].find(name[0]) > -1: <NEW_LINE> <INDENT> seen = True <NEW_LINE> <DEDENT> <DEDENT> elif sample['files'][0].find(name) > -1: <NEW_LINE> <INDENT> seen = True <NEW_LINE> <DEDENT> <DEDENT> if not seen: <NEW_LINE> <INDENT> print("WARNING: sample not found %s" % str(name))
Print warning if samples in CSV file are missing in folder
625941bd009cb60464c632ad
def _timestamp(): <NEW_LINE> <INDENT> moment = time.time() <NEW_LINE> moment_us = repr(moment).split(".")[1] <NEW_LINE> return time.strftime("%Y-%m-%d-%H-%M-%S-{}".format(moment_us), time.gmtime(moment))
Return a timestamp with microsecond precision.
625941bd8e7ae83300e4aec5
def _encode_set(o): <NEW_LINE> <INDENT> return _encode_with_prefix(SET_PREFIX, "(%s)" % (_encode_list(list(o)).replace('"', PLACEHOLDER)))
Get a string representation of a set with its encoded content.
625941bd7d43ff24873a2b97
def is_balanced(self): <NEW_LINE> <INDENT> return self.__is_balanced(self)
Check if tree is balanced. The heights of two sub trees of any node doesn't differ by more than 1
625941bdbf627c535bc130c8
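The balance condition named in the `is_balanced` docstring above (subtree heights differ by at most 1 at every node) can be sketched as a single post-order pass; the `Node` class below is a hypothetical stand-in, since the entry's `__is_balanced` helper is not shown:

```python
class Node:
    """Minimal binary tree node (hypothetical stub for illustration)."""
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def is_balanced(node):
    # Returns (balanced, height). Each subtree is visited once, so the
    # check runs in O(n) rather than recomputing heights per node.
    if node is None:
        return True, 0
    left_ok, left_h = is_balanced(node.left)
    right_ok, right_h = is_balanced(node.right)
    balanced = left_ok and right_ok and abs(left_h - right_h) <= 1
    return balanced, 1 + max(left_h, right_h)
```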
def GenerateHash(self): <NEW_LINE> <INDENT> return key.Hash(self.GenerateHashBase())
Call ``crypt.key.Hash`` to create a hash code for that ``packet``.
625941bdc432627299f04b3d
@utils.no_error <NEW_LINE> def test_initialize_gleantest(): <NEW_LINE> <INDENT> GleanTest('name')
test.gleantest.GleanTest: class initializes correctly
625941bd8a43f66fc4b53f62
def assert_verify_backup_data(self, host, data_type): <NEW_LINE> <INDENT> self.test_helper.verify_data(data_type, host)
In order for this to work, the corresponding datastore 'helper' class should implement the 'verify_actual_data' method.
625941bddd821e528d63b0a4
def convertToTitle(self, n): <NEW_LINE> <INDENT> result, dvd = "", n <NEW_LINE> while dvd > 0: <NEW_LINE> <INDENT> result += chr((dvd - 1) % 26 + ord("A")) <NEW_LINE> dvd = (dvd - 1) / 26 <NEW_LINE> <DEDENT> return result[::-1]
:type n: int :rtype: str
625941bd4e4d5625662d42d5
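The `convertToTitle` entry above relies on Python 2's integer `/`; a Python 3 sketch of the same bijective base-26 conversion (Excel-style column titles) might look like:

```python
def convert_to_title(n):
    # Bijective base 26: subtract 1 before each divmod so that
    # 26 maps to "Z" rather than rolling over to a two-letter title.
    result = []
    while n > 0:
        n, rem = divmod(n - 1, 26)
        result.append(chr(rem + ord("A")))
    return "".join(reversed(result))
```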
def check_taxonomies(self, taxonomies): <NEW_LINE> <INDENT> orphans = set(self.taxonomies) - set(taxonomies) <NEW_LINE> if orphans: <NEW_LINE> <INDENT> msg = ('The following taxonomies are in the exposure model ' 'but not in the fragility model: %s' % sorted(orphans)) <NEW_LINE> if self.rc.taxonomies_from_model: <NEW_LINE> <INDENT> self.taxonomies = dict((t, self.taxonomies[t]) for t in self.fragility_functions if t in self.taxonomies) <NEW_LINE> logs.LOG.warn(msg) <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> raise RuntimeError(msg)
:param taxonomies: taxonomies coming from the fragility/vulnerability model If the model has less taxonomies than the exposure raises an error unless the parameter ``taxonomies_from_model`` is set.
625941bdfb3f5b602dac358a
def setSources(self, files, tab=None): <NEW_LINE> <INDENT> tab = tab or self.currTab <NEW_LINE> prevPath = tab.getCurrentPath() <NEW_LINE> for path in files: <NEW_LINE> <INDENT> self.setSource(utils.expandUrl(path, prevPath), newTab=True, tab=tab)
Open multiple files in new tabs. :Parameters: files : `list` List of string-based paths to open tab : `BrowserTab` | None Tab this may be opening from. Useful for path expansion.
625941bd851cf427c661a40c
def get_ground_set() -> _structure.Structure: <NEW_LINE> <INDENT> return _structure.CartesianProduct(_structure.GenesisSetM(), _structure.GenesisSetM())
Return the :term:`ground set` of this :term:`algebra`.
625941bd046cf37aa974cc44
def isSequential(self): <NEW_LINE> <INDENT> return bool()
bool QLocalSocket.isSequential()
625941bd6fece00bbac2d636
def get_state(self): <NEW_LINE> <INDENT> raise NotImplementedError()
Return the value, or state of the object, held by this cell.
625941bd566aa707497f446f
def purge(self): <NEW_LINE> <INDENT> t = time.time() <NEW_LINE> expired = [] <NEW_LINE> for address, worker in self.queue.items(): <NEW_LINE> <INDENT> if t > worker.expiry: <NEW_LINE> <INDENT> expired.append(address) <NEW_LINE> <DEDENT> <DEDENT> for address in expired: <NEW_LINE> <INDENT> print(f"W: Idle worker expired: {address}") <NEW_LINE> self.queue.pop(address, None)
Look for & kill expired workers.
625941bda8ecb033257d2fc8
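The two-phase expiry pattern in the `purge` entry above (collect expired keys first, then remove them, so the dict is never mutated mid-iteration) can be sketched standalone; the `Worker` class and `now` parameter below are hypothetical simplifications of the entry's worker registry:

```python
import time

class Worker:
    """Hypothetical stand-in for a queued worker with an expiry timestamp."""
    def __init__(self, expiry):
        self.expiry = expiry

def purge(queue, now=None):
    # Phase 1: collect addresses whose expiry has passed.
    # Phase 2: pop them, leaving live workers untouched.
    now = time.time() if now is None else now
    expired = [addr for addr, w in queue.items() if now > w.expiry]
    for addr in expired:
        queue.pop(addr, None)
    return expired

queue = {"w1": Worker(expiry=100.0), "w2": Worker(expiry=300.0)}
removed = purge(queue, now=200.0)
```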
def resnet152(pretrained=False, **kwargs): <NEW_LINE> <INDENT> model = ResNet('resnet152', Bottleneck, [3, 8, 36, 3], **kwargs) <NEW_LINE> if pretrained: <NEW_LINE> <INDENT> model.load_state_dict(model_zoo.load_url(model_urls['resnet152'])) <NEW_LINE> <DEDENT> return model
Constructs a ResNet-152 model. Args: pretrained (bool): If True, returns a model pre-trained on ImageNet
625941bdbe383301e01b5386
def save_resultsToExcel(self, df, sheet_name, feature, freq=''): <NEW_LINE> <INDENT> book = load_workbook(self.fileName) <NEW_LINE> writer = pd.ExcelWriter(self.fileName, engine='openpyxl') <NEW_LINE> writer.book = book <NEW_LINE> writer.sheets = dict((ws.title, ws) for ws in book.worksheets) <NEW_LINE> df.to_excel(writer,startcol=1 ,startrow=3, sheet_name=sheet_name) <NEW_LINE> sheetname = book.get_sheet_by_name(sheet_name) <NEW_LINE> sheetname['D2'] = feature <NEW_LINE> sheetname['E2'] = freq <NEW_LINE> writer.save() <NEW_LINE> book.close() <NEW_LINE> print("\nResults saved to the worksheet")
Save dataframe to a sheet and statically write something in one cell. Takes a dataframe handler, the sheet_name in which that dataframe is to be stored, and a feature for writing in a single cell; fileName is the workbook excel file.
625941bdf8510a7c17cf95f5
def set_relation(self, from_key, to_key): <NEW_LINE> <INDENT> row = dict(from_to_ticket_id = '{}-{}'.format(from_key, to_key), from_key = int(from_key), to_key = int(to_key)) <NEW_LINE> self._table.upsert(row, ['from_to_ticket_id'])
Register a Redmine ticket relation.
625941bd004d5f362079a230
def negate(a): <NEW_LINE> <INDENT> res = 0 <NEW_LINE> d = 1 if a < 0 else -1 <NEW_LINE> while a != 0: <NEW_LINE> <INDENT> res += d <NEW_LINE> a += d <NEW_LINE> <DEDENT> return res
Negate an integer using add operator
625941bd377c676e912720a4
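A quick standalone check of the add-only negation above: step the input toward zero while accumulating steps of the opposite sign.

```python
def negate(a):
    # d is +1 when a is negative and -1 when a is positive; each loop
    # iteration moves a one step toward zero and res one step toward
    # -a, using only the add operator.
    res = 0
    d = 1 if a < 0 else -1
    while a != 0:
        res += d
        a += d
    return res
```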
def pop_message(self): <NEW_LINE> <INDENT> logger.debug("Called pop_message on {type}.".format(type=self._clazz)) <NEW_LINE> self._new_messages.acquire() <NEW_LINE> with self._queue_access: <NEW_LINE> <INDENT> message = self._queue.popleft() <NEW_LINE> logger.debug('Messages waiting in queue: %d', len(self._queue)) <NEW_LINE> if not isinstance(message, self._clazz): <NEW_LINE> <INDENT> raise TypeError("Popped message is not type {_clazz} but type {type}:\n{msg}".format( _clazz=self._clazz, type=type(message), msg=message )) <NEW_LINE> <DEDENT> assert isinstance(message, Message) <NEW_LINE> assert isinstance(message, self._clazz) <NEW_LINE> return message
Get a message. :return:
625941bd7b180e01f3dc46fd
def remove(self,value): <NEW_LINE> <INDENT> prevnode = self.root <NEW_LINE> curnode = self.root.next <NEW_LINE> while curnode.next is not None: <NEW_LINE> <INDENT> if curnode.value == value: <NEW_LINE> <INDENT> prevnode.next = curnode.next <NEW_LINE> if curnode is self.tailnode: <NEW_LINE> <INDENT> self.tailnode = prevnode <NEW_LINE> <DEDENT> del curnode <NEW_LINE> self.length -= 1 <NEW_LINE> return 1 <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> prevnode = curnode <NEW_LINE> <DEDENT> <DEDENT> return -1
Delete the first node containing the value by pointing the previous node's next at the node after it.
625941bdd10714528d5ffbda
def transform(self, X): <NEW_LINE> <INDENT> return self.mdm.transform(X)
get the distance to each centroid. Parameters ---------- X : ndarray, shape (n_trials, n_channels, n_channels) ndarray of SPD matrices. Returns ------- dist : ndarray, shape (n_trials, n_cluster) the distance to each centroid according to the metric.
625941bd3cc13d1c6d3c7275
def init_completer(self): <NEW_LINE> <INDENT> from IPython.core.completerlib import (module_completer, magic_run_completer, cd_completer) <NEW_LINE> self.Completer = ZMQCompleter(self, self.client, config=self.config) <NEW_LINE> self.set_hook('complete_command', module_completer, str_key = 'import') <NEW_LINE> self.set_hook('complete_command', module_completer, str_key = 'from') <NEW_LINE> self.set_hook('complete_command', magic_run_completer, str_key = '%run') <NEW_LINE> self.set_hook('complete_command', cd_completer, str_key = '%cd') <NEW_LINE> if self.has_readline: <NEW_LINE> <INDENT> self.set_readline_completer()
Initialize the completion machinery. This creates completion machinery that can be used by client code, either interactively in-process (typically triggered by the readline library), programatically (such as in test suites) or out-of-prcess (typically over the network by remote frontends).
625941bd6aa9bd52df036c9d
def after_parsing(self): <NEW_LINE> <INDENT> super().after_parsing() <NEW_LINE> if self.searchtype == 'image': <NEW_LINE> <INDENT> for key, i in self.iter_serp_items(): <NEW_LINE> <INDENT> for regex in ( r'imgurl:"(?P<url>.*?)"', ): <NEW_LINE> <INDENT> result = re.search(regex, self.search_results[key][i]['link']) <NEW_LINE> if result: <NEW_LINE> <INDENT> self.search_results[key][i]['link'] = result.group('url') <NEW_LINE> break
Clean the urls. The image url data is in the m attribute. m={ns:"images.1_4",k:"5018",mid:"46CE8A1D71B04B408784F0219B488A5AE91F972E", surl:"http://berlin-germany.ca/",imgurl:"http://berlin-germany.ca/images/berlin250.jpg", oh:"184",tft:"45",oi:"http://berlin-germany.ca/images/berlin250.jpg"}
625941bddd821e528d63b0a5
def _writeViewSettings(self): <NEW_LINE> <INDENT> logger.debug("Writing view settings for window: {:d}".format(self._instance_nr)) <NEW_LINE> settings = get_qsettings() <NEW_LINE> settings.beginGroup(self._settings_group_name('view')) <NEW_LINE> self.obj_tree.write_view_settings("table/header_state", settings) <NEW_LINE> settings.setValue("central_splitter/state", self.central_splitter.saveState()) <NEW_LINE> settings.setValue("details_button_idx", self.button_group.checkedId()) <NEW_LINE> settings.setValue("main_window/pos", self.pos()) <NEW_LINE> settings.setValue("main_window/size", self.size()) <NEW_LINE> settings.endGroup()
Writes the view settings to the persistent store
625941bd0a366e3fb873e711
@under_name_scope() <NEW_LINE> def generate_fpn_proposals( multilevel_pred_boxes, multilevel_label_logits, image_shape2d): <NEW_LINE> <INDENT> num_lvl = len(cfg.FPN.ANCHOR_STRIDES) <NEW_LINE> assert len(multilevel_pred_boxes) == num_lvl <NEW_LINE> assert len(multilevel_label_logits) == num_lvl <NEW_LINE> training = get_current_tower_context().is_training <NEW_LINE> all_boxes = [] <NEW_LINE> all_scores = [] <NEW_LINE> if cfg.FPN.PROPOSAL_MODE == 'Level': <NEW_LINE> <INDENT> fpn_nms_topk = cfg.RPN.TRAIN_PER_LEVEL_NMS_TOPK if training else cfg.RPN.TEST_PER_LEVEL_NMS_TOPK <NEW_LINE> for lvl in range(num_lvl): <NEW_LINE> <INDENT> with tf.name_scope('Lvl{}'.format(lvl + 2)): <NEW_LINE> <INDENT> pred_boxes_decoded = multilevel_pred_boxes[lvl] <NEW_LINE> proposal_boxes, proposal_scores = generate_rpn_proposals( tf.reshape(pred_boxes_decoded, [-1, 4]), tf.reshape(multilevel_label_logits[lvl], [-1]), image_shape2d, fpn_nms_topk) <NEW_LINE> all_boxes.append(proposal_boxes) <NEW_LINE> all_scores.append(proposal_scores) <NEW_LINE> <DEDENT> <DEDENT> proposal_boxes = tf.concat(all_boxes, axis=0) <NEW_LINE> proposal_scores = tf.concat(all_scores, axis=0) <NEW_LINE> proposal_topk = tf.minimum(tf.size(proposal_scores), fpn_nms_topk) <NEW_LINE> proposal_scores, topk_indices = tf.nn.top_k(proposal_scores, k=proposal_topk, sorted=False) <NEW_LINE> proposal_boxes = tf.gather(proposal_boxes, topk_indices, name="all_proposals") <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> for lvl in range(num_lvl): <NEW_LINE> <INDENT> with tf.name_scope('Lvl{}'.format(lvl + 2)): <NEW_LINE> <INDENT> pred_boxes_decoded = multilevel_pred_boxes[lvl] <NEW_LINE> all_boxes.append(tf.reshape(pred_boxes_decoded, [-1, 4])) <NEW_LINE> all_scores.append(tf.reshape(multilevel_label_logits[lvl], [-1])) <NEW_LINE> <DEDENT> <DEDENT> all_boxes = tf.concat(all_boxes, axis=0) <NEW_LINE> all_scores = tf.concat(all_scores, axis=0) <NEW_LINE> proposal_boxes, proposal_scores = generate_rpn_proposals( all_boxes, all_scores, image_shape2d, cfg.RPN.TRAIN_PRE_NMS_TOPK if training else cfg.RPN.TEST_PRE_NMS_TOPK, cfg.RPN.TRAIN_POST_NMS_TOPK if training else cfg.RPN.TEST_POST_NMS_TOPK) <NEW_LINE> <DEDENT> tf.sigmoid(proposal_scores, name='probs') <NEW_LINE> return tf.stop_gradient(proposal_boxes, name='boxes'), tf.stop_gradient(proposal_scores, name='scores')
Args: multilevel_pred_boxes: #lvl H x Wx A x 4 boxes multilevel_label_logits: #lvl tensors of shape H x W x A Returns: boxes: k x 4 float scores: k logits
625941bd507cdc57c6306bce
def backward(self, delta): <NEW_LINE> <INDENT> if self.activation_type == "sigmoid": <NEW_LINE> <INDENT> grad = self.grad_sigmoid() <NEW_LINE> <DEDENT> elif self.activation_type == "tanh": <NEW_LINE> <INDENT> grad = self.grad_tanh() <NEW_LINE> <DEDENT> elif self.activation_type == "ReLU": <NEW_LINE> <INDENT> grad = self.grad_ReLU() <NEW_LINE> <DEDENT> return grad * delta
Compute the backward pass.
625941bd91f36d47f21ac3e9
def construct_object(self, node, deep=False): <NEW_LINE> <INDENT> if node in self.constructed_objects: <NEW_LINE> <INDENT> return self.constructed_objects[node] <NEW_LINE> <DEDENT> if deep: <NEW_LINE> <INDENT> old_deep = self.deep_construct <NEW_LINE> self.deep_construct = True <NEW_LINE> <DEDENT> if node in self.recursive_objects: <NEW_LINE> <INDENT> return self.recursive_objects[node] <NEW_LINE> <DEDENT> self.recursive_objects[node] = None <NEW_LINE> constructor = None <NEW_LINE> tag_suffix = None <NEW_LINE> if node.tag in self.yaml_constructors: <NEW_LINE> <INDENT> constructor = self.yaml_constructors[node.tag] <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> for tag_prefix in self.yaml_multi_constructors: <NEW_LINE> <INDENT> if node.tag.startswith(tag_prefix): <NEW_LINE> <INDENT> tag_suffix = node.tag[len(tag_prefix) :] <NEW_LINE> constructor = self.yaml_multi_constructors[tag_prefix] <NEW_LINE> break <NEW_LINE> <DEDENT> <DEDENT> else: <NEW_LINE> <INDENT> if None in self.yaml_multi_constructors: <NEW_LINE> <INDENT> tag_suffix = node.tag <NEW_LINE> constructor = self.yaml_multi_constructors[None] <NEW_LINE> <DEDENT> elif None in self.yaml_constructors: <NEW_LINE> <INDENT> constructor = self.yaml_constructors[None] <NEW_LINE> <DEDENT> elif isinstance(node, ScalarNode): <NEW_LINE> <INDENT> constructor = self.__class__.construct_scalar <NEW_LINE> <DEDENT> elif isinstance(node, SequenceNode): <NEW_LINE> <INDENT> constructor = self.__class__.construct_sequence <NEW_LINE> <DEDENT> elif isinstance(node, MappingNode): <NEW_LINE> <INDENT> constructor = self.__class__.construct_mapping <NEW_LINE> <DEDENT> <DEDENT> <DEDENT> if tag_suffix is None: <NEW_LINE> <INDENT> data = constructor(self, node) <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> data = constructor(self, tag_suffix, node) <NEW_LINE> <DEDENT> if isinstance(data, types.GeneratorType): <NEW_LINE> <INDENT> generator = data <NEW_LINE> data = next(generator) <NEW_LINE> if self.deep_construct: <NEW_LINE> <INDENT> for _dummy in generator: <NEW_LINE> <INDENT> pass <NEW_LINE> <DEDENT> <DEDENT> else: <NEW_LINE> <INDENT> self.state_generators.append(generator) <NEW_LINE> <DEDENT> <DEDENT> self.constructed_objects[node] = data <NEW_LINE> del self.recursive_objects[node] <NEW_LINE> if deep: <NEW_LINE> <INDENT> self.deep_construct = old_deep <NEW_LINE> <DEDENT> return data
deep is True when creating an object/mapping recursively; in that case we want the underlying elements available during construction
625941bd187af65679ca5018
def process_base_url(self): <NEW_LINE> <INDENT> page_numbers = [] <NEW_LINE> soup_content = self.get_soup_content() <NEW_LINE> for link in soup_content.find_all('a'): <NEW_LINE> <INDENT> href = link['href'] <NEW_LINE> if self.page_indicator in href: <NEW_LINE> <INDENT> page_numbers.append(int(href.replace(self.page_indicator,''))) <NEW_LINE> <DEDENT> <DEDENT> self.pages_to_process = [self.page_indicator + str(p) for p in xrange(1, max(page_numbers) + 1)]
Here we figure out what the pages are that we need to process. We find the max paginator and append all possible pages to the self.pages_to_process list. We will process the companies on the base later when processing all of the pages that we figure out we need to process here.
625941bd91f36d47f21ac3ea
@app.route('/wiki/w/<search>') <NEW_LINE> def wikiIndex(search): <NEW_LINE> <INDENT> return render_template('wiki.html', search=search)
This route is used to display each entry of the wiki site.
625941bd97e22403b379ce93
def status(update, context): <NEW_LINE> <INDENT> update.message.reply_text(str(update)+'\n'+str(context))
Docstr
625941bd85dfad0860c3ad53
def post(self, *evts, **kargs): <NEW_LINE> <INDENT> allowed_kargs = {'sm_state'} <NEW_LINE> if not set(kargs.keys()) <= allowed_kargs: <NEW_LINE> <INDENT> raise TypeError("Unexpected keyword argument(s) '%s'" % (list(kargs - allowed_kargs))) <NEW_LINE> <DEDENT> self._settled.clear() <NEW_LINE> for e in evts: <NEW_LINE> <INDENT> if e is None: <NEW_LINE> <INDENT> raise TypeError('Event posted to SM cannot be None.') <NEW_LINE> <DEDENT> self._event_queue.put( self._get_sm_state(e, sm_state=kargs.get('sm_state')))
Adds event(s) to the State Machine's input processing queue. Keyword arguments: sm_state: SMState to which the event should be posted. If not set and the StateMachine was created with a 'demux' argument, the 'demux' function will be used to determine which SMState should be used.
625941bd0a366e3fb873e712
def p_direct_abstract_declarator_5(self, p): <NEW_LINE> <INDENT> p[0] = c_ast.ArrayDecl( type=c_ast.TypeDecl(None, None, None), dim=c_ast.ID(p[3], self._coord(p.lineno(3))), coord=self._coord(p.lineno(1)))
direct_abstract_declarator : LBRACKET TIMES RBRACKET
625941bd5e10d32532c5ee21
def removeClef(self, pid, offset, partIndex): <NEW_LINE> <INDENT> return self.update(pid, "removeClef", (offset, partIndex), partIndex, offset)
remove-clef project-id offset partIndex Remove a clef
625941bdeab8aa0e5d26da58
def run(self): <NEW_LINE> <INDENT> comms = self._data['comms'] <NEW_LINE> protocol = comms.getProtocol() <NEW_LINE> while self._data['queue']: <NEW_LINE> <INDENT> response = self._data['queue'].pop(0) <NEW_LINE> comms.triggerReceiveWatchers(response) <NEW_LINE> if isinstance(response, protocol.responses.responseBasicDatalog): <NEW_LINE> <INDENT> return <NEW_LINE> <DEDENT> self._controller.log( 'comms.handleReceivedPackets', 'DEBUG', '%s packet received and processed' % protocol.getPacketName(response.getPayloadIdInt()) )
Trigger events in gui or anything else required
625941bd50485f2cf553cc92
def split_nights(self): <NEW_LINE> <INDENT> bjds = np.array(list(self.data['bjd'])) <NEW_LINE> gaps = bjds[1:] - bjds[:-1] <NEW_LINE> gap_indices = np.where(gaps > 0.01)[0] <NEW_LINE> nights = [self.data[:gap_indices[0]]] <NEW_LINE> for i in range(1, len(gap_indices)-1): <NEW_LINE> <INDENT> nights.append(self.data[gap_indices[i]+1:gap_indices[i+1]]) <NEW_LINE> <DEDENT> nights.append(self.data[(gap_indices[-1]+1):]) <NEW_LINE> return nights
Split a star's time series data by night of observation. Intended for use only with ground-based telescopes. Parameters ---------- None Returns ------- nights: list List of astropy.table.Table objects, one for each night of observations.
625941bd85dfad0860c3ad54
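The gap-splitting idea in the `split_nights` entry above (start a new night whenever consecutive timestamps differ by more than 0.01 days) can be sketched in pure Python; this hypothetical standalone version also handles the no-gap case, which the entry's index arithmetic does not cover:

```python
def split_nights(bjds, gap=0.01):
    # Start a new night whenever consecutive timestamps differ by
    # more than `gap` days (0.01 d is roughly 14.4 minutes).
    nights = [[bjds[0]]]
    for prev, cur in zip(bjds, bjds[1:]):
        if cur - prev > gap:
            nights.append([cur])
        else:
            nights[-1].append(cur)
    return nights

times = [0.0, 0.001, 0.002, 1.0, 1.001, 2.5]
nights = split_nights(times)
```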
def nanog_demo(): <NEW_LINE> <INDENT> np.random.seed(123456) <NEW_LINE> n_times = 20 <NEW_LINE> sim = sf.StemCellSwitch(n_times=n_times, real_time=10) <NEW_LINE> loss = lf.RegularisedEndLoss( target=50.0, target_ind=sim.output_vars.index('NANOG'), u_dim=1, time_ind=sim.n_times - 1, reg_weights=0.0) <NEW_LINE> gp = lf.FixedGaussGP( lengthscales=np.array([100.0 ** 2, 100.0 ** 2, 100.0 ** 2, 20 ** 2, 100.0 ** 2, 100.0 ** 2]), variance=5 ** 2, likelihood_variance=2 ** 2) <NEW_LINE> knots = np.array([0, 5, 10]) <NEW_LINE> knot_values = np.array([200.0, 150.0, 100.0]) <NEW_LINE> result_lst = lf.search_u(sim=sim, loss=loss, gp=gp, knots=knots, knot_values=knot_values, x0=np.zeros(len(sim.output_vars)), u_max_limit=1000.0, n_epochs=6-1, n_samples=10) <NEW_LINE> file_name = get_file_name('results/result_list_nanog_50_last.dmp') <NEW_LINE> with open(file_name, 'wb') as file_ptr: <NEW_LINE> <INDENT> pickle.dump(result_lst, file_ptr) <NEW_LINE> file_ptr.close()
Optimise input so that a target level of NANOG is produced in the Biomodel `Chickarmane2006 - Stem cell switch reversible <https://www.ebi.ac.uk/biomodels/BIOMD0000000203>`_ The model is simulated using a ODE solver of the `Tellurium <http://tellurium.analogmachine.org/>`_ package for biomolecular models. To optimise the output a Gaussian process state space model (GPSSM) is constructed from an initial and ``n_epochs`` follow-up experiments. All species levels of the system at a particular simulation step are input to the GP, and the increase or decrease in the next simulation step is the output, ie, there is one GP for each species (assumed to be independent conditional on the common input). The settings for the Gaussian process gp parameters (lengthscales for the inputs, variance, and error variance for the output squared exponential gp) are chosen manually to fit the range of the variables. The aim is to achieve a level of 50 for NANOG by simulation step 20 (real time 10). Input is only allowed at steps 0, 5, and 10. The input is limited to [0.0,1000.0]. The optimisation takes a few minutes on a typical workstation. However, depending on random settings it might take more or fewer epochs to find an input that induces the desired level of NANOG. Returns: Successive optimisation results are stored in a results file than can be loaded and displayed by running ``dynlearn/demo_plots.py`` Call for example under Unix by:: python3 dynlearn/demo.py python3 dynlearn/demo_plots.py display dynlearn/results/Nanog_target_50.png
625941bd3d592f4c4ed1cf6f
def welcome_messages_rules_show(id): <NEW_LINE> <INDENT> binding = {'id': id} <NEW_LINE> url = 'https://api.twitter.com/1.1/direct_messages/welcome_messages/rules/show.json' <NEW_LINE> return _TwitterRequest('GET', url, 'rest:direct_messages', 'get-direct-messages-welcome-messages-rules-show', binding)
Returns a Welcome Message Rule by the given id.
625941bd26238365f5f0ed65