| code (string, 4-4.48k chars) | docstring (string, 1-6.45k chars) | _id (string, 24 chars) |
|---|---|---|
def parse_synthentic_device_info(host_id, data): <NEW_LINE> <INDENT> for plugin, device_data in data.items(): <NEW_LINE> <INDENT> if plugin == 'linux_network': <NEW_LINE> <INDENT> if len(device_data['lnet']['nids']) > 0: <NEW_LINE> <INDENT> nid_tuples = [] <NEW_LINE> for name, nid in device_data['lnet']['nids'].items(): <NEW_LINE> <INDENT> nid_tuples.append(Nid.Nid(nid['nid_address'], nid['lnd_type'], nid['lnd_network'])) <NEW_LINE> <DEDENT> <DEDENT> else: <NEW_LINE> <INDENT> nid_tuples = None <NEW_LINE> <DEDENT> synthetic_lnet_configuration(ManagedHost.objects.get(id = host_id), nid_tuples) | Parses the data returned from plugins for integration test purposes. Only does lnet data because
at present that is all we need. | 625941be76d4e153a657ea46 |
def add_key_value(key, value, sample_dict): <NEW_LINE> <INDENT> sample_dict[key] = value <NEW_LINE> return sample_dict | function to add a key to a dictionary | 625941be046cf37aa974cc60 |
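Expanded from the tokenized row above, `add_key_value` is trivially runnable; note that it mutates the dictionary in place and returns the same object:

```python
def add_key_value(key, value, sample_dict):
    """Add a key to a dictionary and return it (mutates sample_dict in place)."""
    sample_dict[key] = value
    return sample_dict
```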
def roundvals(self, col: str, precision: int=2): <NEW_LINE> <INDENT> try: <NEW_LINE> <INDENT> self.df[col] = self.df[col].astype("float64") <NEW_LINE> self.df[col] = self.df[col].apply(lambda x: round(x, precision)) <NEW_LINE> <DEDENT> except Exception as e: <NEW_LINE> <INDENT> self.err(e, "Can not round column values") <NEW_LINE> return <NEW_LINE> <DEDENT> self.ok("Rounded values in column " + col) | Round floats in a column. Numbers are going to be
converted to floats if they are not already
:param col: column name
:type col: str
:param precision: float precision, defaults to 2
:type precision: int, optional
:example: ``ds.roundvals("mycol")`` | 625941be8da39b475bd64e86 |
def parse_match_ob(self, match_obj_tuples: tuple): <NEW_LINE> <INDENT> self.load_match_obj(match_obj_tuples) <NEW_LINE> self.convert_to_dict() | parse the matched object that is sent in
:param match_obj_tuples: a matched object created with MDAC_BLOCK_REGEX
and then cast into tuples | 625941bea79ad161976cc05b |
def n_args(func): <NEW_LINE> <INDENT> parameters = inspect.signature(func).parameters.items() <NEW_LINE> nondefault = lambda param: param.default == inspect.Parameter.empty <NEW_LINE> min_args = len([name for name, param in parameters if nondefault(param)]) <NEW_LINE> max_args = len(parameters) <NEW_LINE> return min_args, max_args | Return (min_args, max_args) of input function. | 625941be91f36d47f21ac406 |
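The `n_args` row expands to plain Python once `import inspect` is added (the import is implied but not shown in the row). A self-contained sketch; note that `*args`/`**kwargs` parameters have no default and so are counted as required by this scheme:

```python
import inspect

def n_args(func):
    """Return (min_args, max_args) of the input function."""
    parameters = inspect.signature(func).parameters.items()
    # A parameter is required iff it has no default value.
    nondefault = lambda param: param.default == inspect.Parameter.empty
    min_args = len([name for name, param in parameters if nondefault(param)])
    max_args = len(parameters)
    return min_args, max_args
```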
def orchestrate(mods, saltenv='base', test=None, exclude=None, pillar=None, pillarenv=None): <NEW_LINE> <INDENT> return _orchestrate(mods=mods, saltenv=saltenv, test=test, exclude=exclude, pillar=pillar, pillarenv=pillarenv) | .. versionadded:: Carbon
Execute the orchestrate runner from a masterless minion.
.. seealso:: More Orchestrate documentation
* :ref:`Full Orchestrate Tutorial <orchestrate-runner>`
* :py:mod:`Docs for the ``salt`` state module <salt.states.saltmod>`
CLI Examples:
.. code-block:: bash
salt-call --local state.orchestrate webserver
salt-call --local state.orchestrate webserver saltenv=dev test=True
salt-call --local state.orchestrate webserver saltenv=dev pillarenv=aws | 625941be3cc13d1c6d3c7291 |
def plotSetup(xmin = -6.0, xmax = 6.0, ymin = -2.0, ymax = 4.0): <NEW_LINE> <INDENT> fig = plt.figure() <NEW_LINE> ax = fig.add_subplot(111, aspect='equal') <NEW_LINE> plt.xlim([xmin, xmax]) <NEW_LINE> plt.ylim([ymin, ymax]) <NEW_LINE> ax.axes.set_xlim([xmin, xmax]) <NEW_LINE> centerAxes(ax) <NEW_LINE> return ax | basics of 2D plot setup | 625941be627d3e7fe0d68d64 |
def MI(*args, **kwargs): <NEW_LINE> <INDENT> return mutualInformation(*args, **kwargs) | Short-named version of MI calculation routine | 625941beb7558d58953c4e2f |
def get_plex_metadata(rating_key, part_id, item_type, plex_item=None): <NEW_LINE> <INDENT> if not plex_item: <NEW_LINE> <INDENT> plex_item = get_item(rating_key) <NEW_LINE> <DEDENT> if not plex_item: <NEW_LINE> <INDENT> return <NEW_LINE> <DEDENT> current_part = get_part(plex_item, part_id) <NEW_LINE> if not current_part: <NEW_LINE> <INDENT> raise helpers.PartUnknownException("Part unknown") <NEW_LINE> <DEDENT> stream_info = get_plexapi_stream_info(plex_item, part_id) <NEW_LINE> if not stream_info: <NEW_LINE> <INDENT> return <NEW_LINE> <DEDENT> if item_type == "episode": <NEW_LINE> <INDENT> tvdb_id = None <NEW_LINE> series_tvdb_id = None <NEW_LINE> if tvdb_guid_identifier in plex_item.guid: <NEW_LINE> <INDENT> tvdb_id = plex_item.guid[len(tvdb_guid_identifier):].split("?")[0] <NEW_LINE> series_tvdb_id = tvdb_id.split("/")[0] <NEW_LINE> <DEDENT> metadata = get_metadata_dict(plex_item, current_part, dict(stream_info, **{"plex_part": current_part, "type": "episode", "title": plex_item.title, "series": plex_item.show.title, "id": plex_item.rating_key, "series_id": plex_item.show.rating_key, "season_id": plex_item.season.rating_key, "imdb_id": None, "tvdb_id": tvdb_id, "super_thumb": plex_item.show.thumb, "series_tvdb_id": series_tvdb_id, "season": plex_item.season.index, "episode": plex_item.index }) ) <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> imdb_id = None <NEW_LINE> original_title = plex_item.title_original <NEW_LINE> if imdb_guid_identifier in plex_item.guid: <NEW_LINE> <INDENT> imdb_id = plex_item.guid[len(imdb_guid_identifier):].split("?")[0] <NEW_LINE> <DEDENT> metadata = get_metadata_dict(plex_item, current_part, dict(stream_info, **{"plex_part": current_part, "type": "movie", "title": plex_item.title, "id": plex_item.rating_key, "series_id": None, "season_id": None, "imdb_id": imdb_id, "year": plex_item.year, "tvdb_id": None, "super_thumb": plex_item.thumb, "series_tvdb_id": None, "original_title": original_title, "season": None, "episode": None, "section": 
plex_item.section.title}) ) <NEW_LINE> <DEDENT> return metadata | uses the Plex 3rd party API accessor to get metadata information
:param rating_key: movie or episode
:param part_id:
:param item_type:
:return: | 625941be956e5f7376d70d85 |
def random_rotations_between_grid_interaction_layers_circuit( qubits: Iterable['cirq.GridQubit'], depth: int, *, two_qubit_op_factory: Callable[ ['cirq.GridQubit', 'cirq.GridQubit', 'np.random.RandomState'], 'cirq.OP_TREE' ] = lambda a, b, _: ops.CZPowGate()(a, b), pattern: Sequence[GridInteractionLayer] = GRID_STAGGERED_PATTERN, single_qubit_gates: Sequence['cirq.Gate'] = ( ops.X ** 0.5, ops.Y ** 0.5, ops.PhasedXPowGate(phase_exponent=0.25, exponent=0.5), ), add_final_single_qubit_layer: bool = True, seed: 'cirq.RANDOM_STATE_OR_SEED_LIKE' = None, ) -> 'cirq.Circuit': <NEW_LINE> <INDENT> prng = value.parse_random_state(seed) <NEW_LINE> qubits = list(qubits) <NEW_LINE> coupled_qubit_pairs = _coupled_qubit_pairs(qubits) <NEW_LINE> circuit = circuits.Circuit() <NEW_LINE> previous_single_qubit_layer = ops.Moment() <NEW_LINE> single_qubit_layer_factory = _single_qubit_gates_arg_to_factory( single_qubit_gates=single_qubit_gates, qubits=qubits, prng=prng ) <NEW_LINE> for i in range(depth): <NEW_LINE> <INDENT> single_qubit_layer = single_qubit_layer_factory.new_layer(previous_single_qubit_layer) <NEW_LINE> circuit += single_qubit_layer <NEW_LINE> two_qubit_layer = _two_qubit_layer( coupled_qubit_pairs, two_qubit_op_factory, pattern[i % len(pattern)], prng ) <NEW_LINE> circuit += two_qubit_layer <NEW_LINE> previous_single_qubit_layer = single_qubit_layer <NEW_LINE> <DEDENT> if add_final_single_qubit_layer: <NEW_LINE> <INDENT> circuit += single_qubit_layer_factory.new_layer(previous_single_qubit_layer) <NEW_LINE> <DEDENT> return circuit | Generate a random quantum circuit of a particular form.
This construction is based on the circuits used in the paper
https://www.nature.com/articles/s41586-019-1666-5.
The generated circuit consists of a number of "cycles", this number being
specified by `depth`. Each cycle is actually composed of two sub-layers:
a layer of single-qubit gates followed by a layer of two-qubit gates,
controlled by their respective arguments, see below. The pairs of qubits
in a given entangling layer is controlled by the `pattern` argument,
see below.
Args:
qubits: The qubits to use.
depth: The number of cycles.
two_qubit_op_factory: A callable that returns a two-qubit operation.
These operations will be generated with calls of the form
`two_qubit_op_factory(q0, q1, prng)`, where `prng` is the
pseudorandom number generator.
pattern: A sequence of GridInteractionLayers, each of which determine
which pairs of qubits are entangled. The layers in a pattern are
iterated through sequentially, repeating until `depth` is reached.
single_qubit_gates: Single-qubit gates are selected randomly from this
sequence. No qubit is acted upon by the same single-qubit gate in
consecutive cycles. If only one choice of single-qubit gate is
given, then this constraint is not enforced.
add_final_single_qubit_layer: Whether to include a final layer of
single-qubit gates after the last cycle.
seed: A seed or random state to use for the pseudorandom number
generator. | 625941be656771135c3eb782 |
def makeTests(testDataDir, testClass, fnmatchGlob="*"): <NEW_LINE> <INDENT> for localId in os.listdir(testDataDir): <NEW_LINE> <INDENT> if (fnmatch.fnmatch(localId, fnmatchGlob) and not fnmatch.fnmatch(localId, '*.json')): <NEW_LINE> <INDENT> path = os.path.join(testDataDir, localId) <NEW_LINE> tester = testClass(localId, path) <NEW_LINE> for name, _ in inspect.getmembers(testClass): <NEW_LINE> <INDENT> if name.startswith("test"): <NEW_LINE> <INDENT> yield _wrapTestMethod(getattr(tester, name)) | Top-level entry point for data driven tests. For every subdirectory
in testDataDir, create an instance of testClass and then yield
each of its testMethods in a format suitable for use with nose
test generators. | 625941be15baa723493c3e8a |
@app.route('/editshopping/', methods=['GET', 'POST']) <NEW_LINE> def editshopping(): <NEW_LINE> <INDENT> if g.user: <NEW_LINE> <INDENT> if request.method == "POST": <NEW_LINE> <INDENT> old = request.form['old'] <NEW_LINE> sentence = request.form['shoppinglistname'] <NEW_LINE> postlist = sentence.split(' ') <NEW_LINE> shoppinglistname = ''.join(postlist) <NEW_LINE> owner = session['email'] <NEW_LINE> result = Newshoppinglist.edit(old, shoppinglistname, owner) <NEW_LINE> if result == 1: <NEW_LINE> <INDENT> message = "shopping list successfully updated" <NEW_LINE> result = Newshoppinglist.get_myshopping_lists(owner) <NEW_LINE> shoppingitems = Newshoppinglist.getitems() <NEW_LINE> return render_template('dashboard.html', success=message, datas=result, items=shoppingitems, owner=owner) <NEW_LINE> <DEDENT> elif result == 2: <NEW_LINE> <INDENT> message = "shopping list not found" <NEW_LINE> result = Newshoppinglist.get_myshopping_lists(owner) <NEW_LINE> shoppingitems = Newshoppinglist.getitems() <NEW_LINE> return render_template('dashboard.html', data=message, datas=result, items=shoppingitems, owner=owner) <NEW_LINE> <DEDENT> elif result == 3: <NEW_LINE> <INDENT> result = Newshoppinglist.get_myshopping_lists(owner) <NEW_LINE> shoppingitems = Newshoppinglist.getitems() <NEW_LINE> message = "shopping list not found" <NEW_LINE> return render_template('dashboard.html', data=message, datas=result, items=shoppingitems, owner=owner) <NEW_LINE> <DEDENT> elif result == 4: <NEW_LINE> <INDENT> result = Newshoppinglist.get_myshopping_lists(owner) <NEW_LINE> shoppingitems = Newshoppinglist.getitems() <NEW_LINE> message = "special characters not allowed" <NEW_LINE> return render_template('dashboard.html', data=message, datas=result, items=shoppingitems, owner=owner) <NEW_LINE> <DEDENT> <DEDENT> <DEDENT> return render_template('login.html') | defining route to edit a shoppinglist | 625941be07d97122c417879c |
def get_union_address_url(url): <NEW_LINE> <INDENT> data = requester(url) <NEW_LINE> a = data.find('table').find_all('a', href = re.compile('.*/ws/index')) <NEW_LINE> speech_urls = [] <NEW_LINE> for link in a: <NEW_LINE> <INDENT> speech_urls.append(link['href']) <NEW_LINE> <DEDENT> return speech_urls | input: state of the union page (http://www.presidency.ucsb.edu/sou.php)
output: list of urls for each of the state of the union addresses | 625941be94891a1f4081b9be |
def brpop(self, keys, timeout=0): <NEW_LINE> <INDENT> if timeout is None: <NEW_LINE> <INDENT> timeout = 0 <NEW_LINE> <DEDENT> keys = list_or_args(keys, None) <NEW_LINE> keys.append(timeout) <NEW_LINE> return self.execute_command('BRPOP', *keys) | RPOP a value off of the first non-empty list
named in the ``keys`` list.
If none of the lists in ``keys`` has a value to RPOP, then block
for ``timeout`` seconds, or until a value gets pushed on to one
of the lists.
If timeout is 0, then block indefinitely. | 625941be5166f23b2e1a506f |
def GetChildrenDepth(self): <NEW_LINE> <INDENT> return _itkSpatialObjectToImageFilterPython.itkSpatialObjectToImageFilterSO3IUS3_GetChildrenDepth(self) | GetChildrenDepth(self) -> unsigned int | 625941be796e427e537b04da |
def print_table(data_view): <NEW_LINE> <INDENT> data_view['date'] = data_view['transaction_date_yyyy_mm_dd'] <NEW_LINE> table_data_view = data_view[['date', 'amount', 'account', 'fullname']] <NEW_LINE> print(tabulate(table_data_view, showindex=False, tablefmt='fancy_grid', headers='keys')) | Print a human-readable summary of the data.
| 625941be7d847024c06be1cf |
def _action(self, time_step: ts.TimeStep, policy_state: types.NestedTensor, seed: Optional[types.Seed] = None) -> policy_step.PolicyStep: <NEW_LINE> <INDENT> seed_stream = tfp.util.SeedStream(seed=seed, salt='tf_agents_tf_policy') <NEW_LINE> distribution_step = self._distribution(time_step, policy_state) <NEW_LINE> actions = tf.nest.map_structure( lambda d: reparameterized_sampling.sample(d, seed=seed_stream()), distribution_step.action) <NEW_LINE> info = distribution_step.info <NEW_LINE> if self.emit_log_probability: <NEW_LINE> <INDENT> try: <NEW_LINE> <INDENT> log_probability = tf.nest.map_structure(lambda a, d: d.log_prob(a), actions, distribution_step.action) <NEW_LINE> info = policy_step.set_log_probability(info, log_probability) <NEW_LINE> <DEDENT> except: <NEW_LINE> <INDENT> raise TypeError('%s does not support emitting log-probabilities.' % type(self).__name__) <NEW_LINE> <DEDENT> <DEDENT> return distribution_step._replace(action=actions, info=info) | Implementation of `action`.
Args:
time_step: A `TimeStep` tuple corresponding to `time_step_spec()`.
policy_state: A Tensor, or a nested dict, list or tuple of Tensors
representing the previous policy_state.
seed: Seed to use if action performs sampling (optional).
Returns:
A `PolicyStep` named tuple containing:
`action`: An action Tensor matching the `action_spec`.
`state`: A policy state tensor to be fed into the next call to action.
`info`: Optional side information such as action log probabilities. | 625941be01c39578d7e74d51 |
def activity_post_modify_object(sender, instance, created=None, **kwargs): <NEW_LINE> <INDENT> verb = None <NEW_LINE> obj_type = instance.__class__._meta.object_name.lower() <NEW_LINE> action_settings = defaultdict(lambda: dict(actor=getattr(instance, "owner", None), action_object=instance, created_verb=_('created'), deleted_verb=_('deleted'), obj_type=obj_type, object_name=getattr(instance, 'name', None), target=None, updated_verb=_('updated'), )) <NEW_LINE> try: <NEW_LINE> <INDENT> action_settings['map'].update(object_name=getattr(instance, 'title', None),) <NEW_LINE> <DEDENT> except Exception as e: <NEW_LINE> <INDENT> logger.exception(e) <NEW_LINE> <DEDENT> try: <NEW_LINE> <INDENT> action_settings['comment'].update(actor=getattr(instance, 'author', None), created_verb=_("added a comment"), target=getattr(instance, 'content_object', None), updated_verb=_("updated a comment"), ) <NEW_LINE> <DEDENT> except Exception as e: <NEW_LINE> <INDENT> logger.exception(e) <NEW_LINE> <DEDENT> try: <NEW_LINE> <INDENT> action_settings['layer'].update(created_verb=_('uploaded')) <NEW_LINE> <DEDENT> except Exception as e: <NEW_LINE> <INDENT> logger.exception(e) <NEW_LINE> <DEDENT> try: <NEW_LINE> <INDENT> action_settings['document'].update(created_verb=_('uploaded')) <NEW_LINE> <DEDENT> except Exception as e: <NEW_LINE> <INDENT> logger.exception(e) <NEW_LINE> <DEDENT> try: <NEW_LINE> <INDENT> action = action_settings[obj_type] <NEW_LINE> if created: <NEW_LINE> <INDENT> verb = action.get('created_verb') <NEW_LINE> raw_action = 'created' <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> if created is False: <NEW_LINE> <INDENT> if not isinstance(instance, Layer) and not isinstance(instance, Document) and not isinstance(instance, Map): <NEW_LINE> <INDENT> verb = action.get('updated_verb') <NEW_LINE> raw_action = 'updated' <NEW_LINE> <DEDENT> <DEDENT> if created is None: <NEW_LINE> <INDENT> verb = action.get('deleted_verb') <NEW_LINE> raw_action = 'deleted' <NEW_LINE> 
action.update(action_object=None, target=None) <NEW_LINE> <DEDENT> <DEDENT> <DEDENT> except Exception as e: <NEW_LINE> <INDENT> logger.exception(e) <NEW_LINE> <DEDENT> if verb: <NEW_LINE> <INDENT> try: <NEW_LINE> <INDENT> activity.send(action.get('actor'), verb="{verb}".format(verb=verb), action_object=action.get('action_object'), target=action.get('target', None), object_name=action.get('object_name'), raw_action=raw_action, ) <NEW_LINE> <DEDENT> except Exception: <NEW_LINE> <INDENT> logger.warning('The activity received a non-actionable Model or None as the actor/action.') | Creates new activities after a Map, Layer, Document, or Comment is created/updated/deleted.
action_settings:
actor: the user who performed the activity
action_object: the object that received the action
created_verb: a translatable verb that is used when an object is created
deleted_verb: a translatable verb that is used when an object is deleted
object_name: the title of the object that is used to keep information about the object after it is deleted
target: the target of an action (if a comment is added to a map, the comment is the object the map is the target)
updated_verb: a translatable verb that is used when an object is updated
raw_action: a constant that describes the type of action performed (values should be: created, uploaded, deleted) | 625941bebf627c535bc130e5 |
def test_crouch(self): <NEW_LINE> <INDENT> sentence = 'move crouched to the end of the corridor' <NEW_LINE> vec = MovementNN().run(sentence) <NEW_LINE> expected = np.array([0, 1, 0, 0, 1, 0]) <NEW_LINE> assert np.array_equal(vec, expected) | Tests for a high response for a crouched stance. | 625941be4c3428357757c240 |
def muted(self, ns): <NEW_LINE> <INDENT> self.mlock.acquire() <NEW_LINE> r = ns.lower() in self.mutes <NEW_LINE> self.mlock.release() <NEW_LINE> return r | Check if a channel is muted. | 625941beadb09d7d5db6c6a8 |
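The `muted` row uses manual `acquire()`/`release()`; the equivalent `with` form is exception-safe. A minimal self-contained sketch (the `Channels` holder class is assumed, not in the source):

```python
import threading

class Channels:
    """Hypothetical holder for the mutes set and its lock."""
    def __init__(self):
        self.mutes = set()
        self.mlock = threading.Lock()

    def muted(self, ns):
        """Check if a channel is muted (case-insensitive)."""
        with self.mlock:  # same as acquire()/release(), but released even on error
            return ns.lower() in self.mutes
```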
def set_assistive_touch(self, enabled: bool): <NEW_LINE> <INDENT> self.set_value("com.apple.Accessibility", "AssistiveTouchEnabledByiTunes", enabled) | show or hide the screen assistive touch button
Raises:
ServiceError | 625941be8a349b6b435e808a |
def get_text(path): <NEW_LINE> <INDENT> if not os.path.isfile(path): <NEW_LINE> <INDENT> return None <NEW_LINE> <DEDENT> fp = open(path, 'r') <NEW_LINE> contents = fp.read() <NEW_LINE> fp.close() <NEW_LINE> return contents | Return a string with the textual contents of a file at PATH. | 625941be31939e2706e4cd84 |
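The `get_text` row opens and closes the file manually; a context manager does the same thing more idiomatically and closes the file even if the read raises. A runnable sketch:

```python
import os

def get_text(path):
    """Return a string with the textual contents of a file at PATH, or None."""
    if not os.path.isfile(path):
        return None
    with open(path, 'r') as fp:  # file is closed automatically on exit
        return fp.read()
```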
def parsed_special(self, length=1, label=TOK_SPECIAL): <NEW_LINE> <INDENT> self.__any_tokens = False <NEW_LINE> self.mark_tokens(length, label) <NEW_LINE> self.position += length | Parse a token that's not repeatable | 625941be26068e7796caebf1 |
def click_open_judgement_folder_button(self): <NEW_LINE> <INDENT> talk_set_console.open_judgement_folder() | Click the button that opens the judgement model folder | 625941bed6c5a10208143f5f |
def create_dataset(posts, events): <NEW_LINE> <INDENT> posts = posts.set_index("post_permlink") <NEW_LINE> dataset = pd.DataFrame(columns=["user_id", "post_permlink"]) <NEW_LINE> for user in tqdm(events["user_id"].unique()): <NEW_LINE> <INDENT> user_events = events[(events["user_id"] == user) & (events["like"] > 0.5)] <NEW_LINE> similar_posts = [posts.loc[post]["similar_posts"] for post in user_events["post_permlink"] if post in posts.index] <NEW_LINE> similar_posts = [post for posts in similar_posts for post in posts] <NEW_LINE> similar_distances = [posts.loc[post]["similar_distances"] for post in user_events["post_permlink"] if post in posts.index] <NEW_LINE> similar_distances = [distance for distances in similar_distances for distance in distances] <NEW_LINE> seen_similar_posts = set(user_events["post_permlink"]) <NEW_LINE> unseen_similar_distances = np.array([float(distance) for index, distance in enumerate(similar_distances) if similar_posts[index] not in seen_similar_posts]) <NEW_LINE> unseen_similar_posts = [post for post in similar_posts if (post not in seen_similar_posts) and (post != "@/")] <NEW_LINE> if len(unseen_similar_posts) > 0: <NEW_LINE> <INDENT> unseen_similar_probabilities = np.array([1./len(unseen_similar_posts)] * len(unseen_similar_posts)) <NEW_LINE> if (unseen_similar_distances.sum() > 0): <NEW_LINE> <INDENT> unseen_similar_probabilities = (unseen_similar_distances.max() - unseen_similar_distances) / ((unseen_similar_distances.max() - unseen_similar_distances).sum()) <NEW_LINE> <DEDENT> selected_similar_posts = np.unique(np.random.choice(unseen_similar_posts, size=USERS_POSTS_LIMIT, p=unseen_similar_probabilities)) <NEW_LINE> user_dataset = pd.DataFrame() <NEW_LINE> user_dataset["post_permlink"] = selected_similar_posts <NEW_LINE> user_dataset["user_id"] = user <NEW_LINE> dataset = pd.concat([dataset, user_dataset]) <NEW_LINE> <DEDENT> <DEDENT> dataset["like"] = 1 <NEW_LINE> return dataset | Function to generate pairs with posts similar to 
those viewed, for each user | 625941be377c676e912720c0 |
def distance(lon1, lat1, lon2, lat2): <NEW_LINE> <INDENT> pi = np.pi <NEW_LINE> radius = 6371. <NEW_LINE> dlat = deg2rad(lat2 - lat1) / 2.0 <NEW_LINE> dlon = deg2rad(lon2 - lon1) / 2.0 <NEW_LINE> a = sin(dlat)**2 + cos(deg2rad(lat1)) * cos(deg2rad(lat2)) * sin(dlon)**2 <NEW_LINE> d = 2.0 * radius * arctan2(sqrt(a), sqrt(1 - a)) <NEW_LINE> return d | Calculate great-circle distance in kilometers
Parameters
----------
lon1 : float
Longitude of first point in degrees
lat1 : float
Latitude of first point in degrees
lon2 : float
Longitude of second point in degrees
lat2 : float
Latitude of second point in degrees
Returns
-------
d : float
Great-circle distance in kilometers | 625941be379a373c97cfaa5a |
def show_textfile(self, filename, title=''): <NEW_LINE> <INDENT> win = tk.Toplevel(background=self.bg_colour) <NEW_LINE> win.title(title) <NEW_LINE> txt = tkScrolledText(win, width=80,font=self.txtfont) <NEW_LINE> txt.config(state="normal") <NEW_LINE> with open(filename,'r') as f: <NEW_LINE> <INDENT> text = f.read() <NEW_LINE> <DEDENT> txt.insert('1.0', text) <NEW_LINE> txt.config(state="disabled") <NEW_LINE> txt.grid(column=0, row=0, padx=5, pady=5, sticky="NSEW") <NEW_LINE> xbtn = ttk.Button(win, text='Close', command=win.destroy) <NEW_LINE> xbtn.grid(column=0, row=1, padx=5, pady=5, sticky="E") <NEW_LINE> win.rowconfigure(0, weight=1) <NEW_LINE> win.columnconfigure(0, weight=1) | Show a text file in a new window. | 625941be293b9510aa2c31af |
def readandexec(self, conn, addr): <NEW_LINE> <INDENT> data = [] <NEW_LINE> while 1: <NEW_LINE> <INDENT> c = conn.recv(1) <NEW_LINE> if not c: <NEW_LINE> <INDENT> raise IOError('Connection closed') <NEW_LINE> return <NEW_LINE> <DEDENT> c = c.decode() <NEW_LINE> if c == '\0': <NEW_LINE> <INDENT> exec(''.join(data)) <NEW_LINE> return <NEW_LINE> <DEDENT> data.append(c) | read code from the connection until a NUL byte, then execute it | 625941beb7558d58953c4e30 |
def pyre_interactiveSessionContext(self, context=None): <NEW_LINE> <INDENT> context = context or {} <NEW_LINE> context['flo'] = flo <NEW_LINE> return super().pyre_interactiveSessionContext(context=context) | Go interactive | 625941be435de62698dfdb62 |
def save(self, filename): <NEW_LINE> <INDENT> pass | Save a team's info to disk. | 625941be1b99ca400220a9c8 |
def best_fit_line_length(self, vmin=20, vmax=700, component='m'): <NEW_LINE> <INDENT> L = np.sqrt((self.delta_x_dash(vmin=vmin, vmax=vmax, component=component)) ** 2 + (self.delta_y_dash(vmin=vmin, vmax=vmax, component=component)) ** 2) <NEW_LINE> return L | Args:
vmin:
vmax:
component: | 625941be656771135c3eb783 |
def readMesh( self, fpath: Path, render=False, autoclear=False ) -> o3d.geometry.TriangleMesh: <NEW_LINE> <INDENT> mesh = o3d.io.read_triangle_mesh(fpath.as_posix()) <NEW_LINE> if render: <NEW_LINE> <INDENT> self.renderMesh(mesh, autoclear=autoclear) <NEW_LINE> <DEDENT> return mesh | read in a ply file and return the mesh, render if necessary
Args:
autoclear: if true, clear the scene, otherwise, leave it alone
render: if true, render the ply file
fpath: File path to a data file. ex: a .ply file
Returns:
the mesh being rendered | 625941be23849d37ff7b2fa7 |
def forward(self, actions, global_idxes): <NEW_LINE> <INDENT> batch_size = actions.size()[0] <NEW_LINE> x = self.h1(actions) <NEW_LINE> state = x <NEW_LINE> global_idxes = global_idxes.clone() <NEW_LINE> sieve = alive_sieve.AliveSieve(batch_size=batch_size, enable_cuda=x.is_cuda) <NEW_LINE> type_constr = torch.cuda if x.is_cuda else torch <NEW_LINE> last_token = type_constr.LongTensor(batch_size).fill_(0) <NEW_LINE> utterance = type_constr.LongTensor(batch_size, self.utterance_max).fill_(0) <NEW_LINE> N_outer = type_constr.LongTensor(batch_size).fill_(self.utterance_max) <NEW_LINE> stochastic_trajectory = StochasticTrajectory() <NEW_LINE> for t in range(self.utterance_max): <NEW_LINE> <INDENT> emb = self.d2e(last_token) <NEW_LINE> state = self.rnn(emb, state) <NEW_LINE> token_logits = self.e2d(state) <NEW_LINE> token_probs = F.softmax(token_logits, dim=-1) <NEW_LINE> if self.training: <NEW_LINE> <INDENT> s = rl_common.draw_categorical_sample( action_probs=token_probs, batch_idxes=global_idxes[sieve.global_idxes]) <NEW_LINE> stochastic_trajectory.append_stochastic_sample(s=s) <NEW_LINE> token = s.actions.view(-1) <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> _, token = token_probs.max(-1) <NEW_LINE> <DEDENT> utterance[:, t][sieve.global_idxes] = token <NEW_LINE> last_token = token <NEW_LINE> sieve.mark_dead(last_token == 0) <NEW_LINE> sieve.set_global_dead(N_outer, t) <NEW_LINE> if sieve.all_dead(): <NEW_LINE> <INDENT> break <NEW_LINE> <DEDENT> state = state[sieve.alive_idxes] <NEW_LINE> last_token = last_token[sieve.alive_idxes] <NEW_LINE> sieve.self_sieve_() <NEW_LINE> <DEDENT> res = { 'stochastic_trajectory': stochastic_trajectory, 'utterance': utterance, 'utterance_lens': N_outer } <NEW_LINE> return res | This agent will receive the image of the world
x might have been sieved. global_idxes too. but global_idxes contents are the global
indexes | 625941be63d6d428bbe44406 |
def compute_logits(self, token_ids: tf.Tensor, training: bool) -> tf.Tensor: <NEW_LINE> <INDENT> return rnn_output_logits | Implements a language model, where each output is conditional on the current
input and inputs processed so far.
Args:
token_ids: int32 tensor of shape [B, T], storing integer IDs of tokens.
training: Flag indicating if we are currently training (used to toggle dropout)
Returns:
tf.float32 tensor of shape [B, T, V], storing the distribution over output symbols
for each timestep for each batch element. | 625941bed18da76e235323ea |
def animal_attribute(self, legs, eyes): <NEW_LINE> <INDENT> self.legs = legs <NEW_LINE> self.eyes = eyes <NEW_LINE> print(" It has ", self.legs, "legs") <NEW_LINE> print(" It has ", self.eyes, "eyes") | This is the function to define the legs and eyes of an animal. | 625941be24f1403a92600a80 |
def comprobar_sesion(ids=None, function=None): <NEW_LINE> <INDENT> def _dec(view_func): <NEW_LINE> <INDENT> def _view(request, *args, **kwargs): <NEW_LINE> <INDENT> if ids is not None: <NEW_LINE> <INDENT> for clave in ids: <NEW_LINE> <INDENT> if clave not in request.session: <NEW_LINE> <INDENT> return devolver_mensaje(request, "Error de acceso") <NEW_LINE> <DEDENT> <DEDENT> <DEDENT> return view_func(request, *args, **kwargs) <NEW_LINE> <DEDENT> _view.__name__ = view_func.__name__ <NEW_LINE> _view.__dict__ = view_func.__dict__ <NEW_LINE> _view.__doc__ = view_func.__doc__ <NEW_LINE> return _view <NEW_LINE> <DEDENT> if function is None: <NEW_LINE> <INDENT> return _dec <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> return _dec(function) | Checks that the session ids are set correctly | 625941becc40096d61595869 |
def number_to_pattern(number, length): <NEW_LINE> <INDENT> idx = 'ACGT' <NEW_LINE> pattern = '' <NEW_LINE> while number > 0: <NEW_LINE> <INDENT> pattern += idx[number % 4] <NEW_LINE> number //= 4 <NEW_LINE> <DEDENT> return idx[0] * (length - len(pattern)) + pattern[::-1] | Computes dna pattern for given number (1M in BA_AALA)
:param number:
:param length: length of returned kmer
:return: kmer for which pattern_to_number returns `number` | 625941be7cff6e4e8111789d |
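The `number_to_pattern` row is self-contained; expanded below, together with the inverse `pattern_to_number` the docstring refers to (the inverse is a hypothetical counterpart reconstructed from the assumed A=0, C=1, G=2, T=3 base-4 encoding, not shown in the source):

```python
def number_to_pattern(number, length):
    """Decode an integer into a DNA k-mer of the given length (base-4, A=0..T=3)."""
    idx = 'ACGT'
    pattern = ''
    while number > 0:
        pattern += idx[number % 4]   # least-significant digit first
        number //= 4
    # Left-pad with 'A' (digit 0) and reverse into most-significant-first order.
    return idx[0] * (length - len(pattern)) + pattern[::-1]

def pattern_to_number(pattern):
    """Inverse encoding: interpret a k-mer as a base-4 integer."""
    number = 0
    for base in pattern:
        number = number * 4 + 'ACGT'.index(base)
    return number
```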
def getAction(self, gameState): <NEW_LINE> <INDENT> bestAction = "" <NEW_LINE> v = -float('inf') <NEW_LINE> for action in gameState.getLegalActions(): <NEW_LINE> <INDENT> tempVal = self.minValue(gameState.generateSuccessor(self.index, action), 0, 1) <NEW_LINE> if tempVal > v: <NEW_LINE> <INDENT> v = tempVal <NEW_LINE> bestAction = action <NEW_LINE> <DEDENT> <DEDENT> return bestAction | Returns the minimax action using self.depth and self.evaluationFunction | 625941be4527f215b584c371 |
def test_load_register_zero_page_x_can_load_a_value_into_the_register_when_it_wraps(cpu): <NEW_LINE> <INDENT> expected_cycles = 4 <NEW_LINE> stored_value = 0x37 <NEW_LINE> cpu.X = 0xFF <NEW_LINE> cpu.Memory[0xFFFC] = OpCodes.INS_LDA_ZPX <NEW_LINE> cpu.Memory[0xFFFD] = 0x80 <NEW_LINE> cpu.Memory[0x007F] = stored_value <NEW_LINE> cpu_copy = copy.copy(cpu) <NEW_LINE> cycles_used = cpu.execute(expected_cycles) <NEW_LINE> AssertThat(cpu.A).IsEqualTo(stored_value) <NEW_LINE> AssertThat(cycles_used).IsEqualTo(expected_cycles) <NEW_LINE> AssertThat(cpu.Flag.Z).IsFalsy() <NEW_LINE> AssertThat(cpu.Flag.N).IsFalsy() <NEW_LINE> verify_unmodified_flags_from_loading_register(cpu, cpu_copy) | LDAZeroPageXCanLoadAValueIntoTheARegisterWhenItWraps | 625941be66656f66f7cbc0c1 |
def vtysh_config_available(bindir, confdir): <NEW_LINE> <INDENT> try: <NEW_LINE> <INDENT> cmd = [str(bindir + '/vtysh'), '--config_dir', confdir, '-c', 'conf t'] <NEW_LINE> output = subprocess.check_output(cmd).strip() <NEW_LINE> if 'VTY configuration is locked by other VTY' in output.decode('utf-8'): <NEW_LINE> <INDENT> print(output) <NEW_LINE> log.error("'%s' returned\n%s\n" % (' '.join(cmd), output)) <NEW_LINE> return False <NEW_LINE> <DEDENT> <DEDENT> except subprocess.CalledProcessError as e: <NEW_LINE> <INDENT> msg = "vtysh could not connect with any frr daemons" <NEW_LINE> print(msg) <NEW_LINE> log.error(msg) <NEW_LINE> return False <NEW_LINE> <DEDENT> return True | Return False if no frr daemon is running or some other vtysh session is
in 'configuration terminal' mode which will prevent us from making any
configuration changes. | 625941be236d856c2ad446ed |
def wrapWeakref(referent): <NEW_LINE> <INDENT> try: <NEW_LINE> <INDENT> return weakref.ref(referent) <NEW_LINE> <DEDENT> except TypeError: <NEW_LINE> <INDENT> return referent | utility function that wraps objects as weakrefs but does not wrap
already wrapped objects; also prevents wrapping the unwrappable "None" type, etc.
>>> import weakref
>>> class Mock:
... pass
>>> a1 = Mock()
>>> ref1 = common.wrapWeakref(a1)
>>> ref1
<weakref at 0x101f29ae8; to 'Mock' at 0x101e45358>
>>> ref2 = common.wrapWeakref(ref1)
>>> ref2
<weakref at 0x101f299af; to 'Mock' at 0x101e45358>
>>> ref3 = common.wrapWeakref(5)
>>> ref3
5 | 625941bea219f33f34628884 |
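Expanded, the `wrapWeakref` row only shows the try/except half; the no-double-wrapping behaviour its docstring promises needs an explicit type check, added here as an assumption:

```python
import weakref

def wrapWeakref(referent):
    """Wrap referent in a weakref unless it already is one or is unwrappable (e.g. int)."""
    if isinstance(referent, weakref.ref):  # assumed: avoid double-wrapping, per the docstring
        return referent
    try:
        return weakref.ref(referent)
    except TypeError:
        # ints, strings, None, etc. cannot be weakly referenced; pass them through
        return referent
```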
def interpret_value(self, val: typing.Union[str, int, float]) -> typing.List[str]: <NEW_LINE> <INDENT> vals = FieldInfo.interpret_value(self, val) <NEW_LINE> return vals | Return user provided operand interpreted as a date.
@param val: The original user provided operand value.
@return: Date transformed string. | 625941be379a373c97cfaa5b |
def _make_string_stats_proto(string_stats, total_num_values ): <NEW_LINE> <INDENT> result = statistics_pb2.StringStatistics() <NEW_LINE> if total_num_values > 0: <NEW_LINE> <INDENT> result.avg_length = string_stats.total_bytes_length / total_num_values <NEW_LINE> <DEDENT> return result | Convert the partial string statistics into StringStatistics proto. | 625941be63d6d428bbe44407 |
def distort(self, x1=0,y1=0, x2=0,y2=0, x3=0,y3=0, x4=0,y4=0): <NEW_LINE> <INDENT> w, h = self.img.size <NEW_LINE> quad = (-x1,-y1, -x4,h-y4, w-x3,h-y3, w-x2,-y2) <NEW_LINE> self.img = self.img.transform(self.img.size, Image.QUAD, quad, INTERPOLATION) | Distorts the layer.
Distorts the layer by translating
the four corners of its bounding box to the given coordinates:
upper left (x1,y1), upper right(x2,y2),
lower right (x3,y3) and lower left (x4,y4). | 625941becad5886f8bd26ef1 |
def RemoveIamPolicyBinding(self, subscription_ref, member, role): <NEW_LINE> <INDENT> policy = self.GetIamPolicy(subscription_ref) <NEW_LINE> iam_util.RemoveBindingFromIamPolicy(policy, member, role) <NEW_LINE> return self.SetIamPolicy(subscription_ref, policy) | Removes an IAM Policy binding from a Subscription.
Args:
subscription_ref (Resource): Resource reference for subscription to
remove IAM policy binding from.
member (str): The member to add.
role (str): The role to assign to the member.
Returns:
Policy: the updated policy.
Raises:
api_exception.HttpException: If either of the requests failed. | 625941be8e7ae83300e4aee3 |
def __init__(self, msg): <NEW_LINE> <INDENT> super(InvalidBLEAdvertisingDataException, self).__init__(msg) | Constructor
Args:
msg (str): The message to raise. | 625941be1f5feb6acb0c4a6b |
def max_sum_recursive(nums): <NEW_LINE> <INDENT> return _max_sum_recursive(nums, 0, len(nums)-1) | Divide and conquer. Assumes non-empty list.
t: O(nlogn)
s: O(logn) | 625941be8e05c05ec3eea289 |
def pca(arr_x, dim=None): <NEW_LINE> <INDENT> if dim is None: <NEW_LINE> <INDENT> dim = arr_x.shape[0] <NEW_LINE> <DEDENT> xxt = arr_x.dot(arr_x.T) <NEW_LINE> u_x, lambda_x, _ = np.linalg.svd(xxt) <NEW_LINE> return u_x[:, 0:dim], np.sqrt(lambda_x[0:dim]) | Computes the first dim principal axes of arr_x.
Note that this implementation is only equivalent to PCA on a centered
and scaled data matrix, regardless of whether arr_x is normalized.
Args:
arr_x: Data matrix.
dim: Number of components to compute
Returns:
u_x: The principal vectors in column order.
lambda_x: The loadings vector. | 625941be009cb60464c632cb |
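A quick check of the SVD route used above: the left singular vectors of X Xᵀ are orthonormal and the loadings come out in descending order (the random data here is purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
arr_x = rng.normal(size=(3, 200))            # 3 variables, 200 samples
u_x, lambda_x, _ = np.linalg.svd(arr_x @ arr_x.T)
loadings = np.sqrt(lambda_x)

assert u_x.shape == (3, 3)
assert np.allclose(u_x.T @ u_x, np.eye(3))   # principal axes are orthonormal
assert np.all(np.diff(loadings) <= 0)        # loadings sorted descending
```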
def remove(self, path, recursive=True): <NEW_LINE> <INDENT> if not self.exists(path): <NEW_LINE> <INDENT> logger.debug('Could not delete %s; path does not exist', path) <NEW_LINE> return False <NEW_LINE> <DEDENT> (bucket, key) = self._path_to_bucket_and_key(path) <NEW_LINE> if self._is_root(key): <NEW_LINE> <INDENT> raise InvalidDeleteException( 'Cannot delete root of bucket at path %s' % path) <NEW_LINE> <DEDENT> s3_bucket = self.s3.get_bucket(bucket, validate=True) <NEW_LINE> s3_key = s3_bucket.get_key(key) <NEW_LINE> if s3_key: <NEW_LINE> <INDENT> s3_bucket.delete_key(s3_key) <NEW_LINE> logger.debug('Deleting %s from bucket %s', key, bucket) <NEW_LINE> return True <NEW_LINE> <DEDENT> if self.isdir(path) and not recursive: <NEW_LINE> <INDENT> raise InvalidDeleteException( 'Path %s is a directory. Must use recursive delete' % path) <NEW_LINE> <DEDENT> delete_key_list = [ k for k in s3_bucket.list(self._add_path_delimiter(key))] <NEW_LINE> if len(delete_key_list) > 0: <NEW_LINE> <INDENT> for k in delete_key_list: <NEW_LINE> <INDENT> logger.debug('Deleting %s from bucket %s', k, bucket) <NEW_LINE> <DEDENT> s3_bucket.delete_keys(delete_key_list) <NEW_LINE> return True <NEW_LINE> <DEDENT> return False | Remove a file or directory from S3. | 625941bef8510a7c17cf9612 |
def _get_hw_led_channels(self, channel): <NEW_LINE> <INDENT> if channel == 'sync' or len(self._led_names) == 1: <NEW_LINE> <INDENT> return list(range(len(self._led_names))) <NEW_LINE> <DEDENT> if channel in self._led_names: <NEW_LINE> <INDENT> return [self._led_names.index(channel)] <NEW_LINE> <DEDENT> raise ValueError(f'unknown channel, should be one of: {_quoted("sync", *self._led_names)}') | This will get a list of all the led channels that the command should be sent to
It will look up the name of the given led channel and return a list of the real led device numbers.
def load(path, group=None, sel=None, unpack=False): <NEW_LINE> <INDENT> with tables.open_file(path, mode='r') as h5file: <NEW_LINE> <INDENT> pathtable = dict() <NEW_LINE> if group is not None: <NEW_LINE> <INDENT> if isinstance(group, str): <NEW_LINE> <INDENT> data = _load_specific_level(h5file, h5file, group, sel=sel, pathtable=pathtable) <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> data = [] <NEW_LINE> for g in group: <NEW_LINE> <INDENT> data_i = _load_specific_level(h5file, h5file, g, sel=sel, pathtable=pathtable) <NEW_LINE> data.append(data_i) <NEW_LINE> <DEDENT> data = tuple(data) <NEW_LINE> <DEDENT> <DEDENT> else: <NEW_LINE> <INDENT> grp = h5file.root <NEW_LINE> auto_unpack = (IO_UNPACK in grp._v_attrs and grp._v_attrs[IO_UNPACK]) <NEW_LINE> do_unpack = unpack or auto_unpack <NEW_LINE> if do_unpack and len(grp._v_children) == 1: <NEW_LINE> <INDENT> name = next(iter(grp._v_children)) <NEW_LINE> data = _load_specific_level(h5file, grp, name, sel=sel, pathtable=pathtable) <NEW_LINE> do_unpack = False <NEW_LINE> <DEDENT> elif sel is not None: <NEW_LINE> <INDENT> raise ValueError("Must specify group with `sel` unless it " "automatically unpacks") <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> data = _load_level(h5file, grp, pathtable) <NEW_LINE> <DEDENT> if do_unpack and isinstance(data, dict) and len(data) == 1: <NEW_LINE> <INDENT> data = next(iter(data.values())) <NEW_LINE> <DEDENT> <DEDENT> <DEDENT> return data | Loads an HDF5 saved with `save`.
This function requires the `PyTables <http://www.pytables.org/>`_ module to
be installed.
Parameters
----------
path : string
Filename from which to load the data.
group : string or list
Load a specific group in the HDF5 hierarchy. If `group` is a list of
strings, then a tuple will be returned with all the groups that were
specified.
sel : slice or tuple of slices
If you specify `group` and the target is a numpy array, then you can
use this to slice it. This is useful for opening subsets of large HDF5
files. To compose the selection, you can use `hdf5io.aslice`.
unpack : bool
If True, a single-entry dictionary will be unpacked and the value
will be returned directly. That is, if you save ``dict(a=100)``, only
``100`` will be loaded.
Returns
-------
data : anything
Hopefully an identical reconstruction of the data that was saved.
See also
--------
save | 625941be30c21e258bdfa3b3 |
def authenticate(username, password): <NEW_LINE> <INDENT> user = UserModel.find_by_username(username) <NEW_LINE> if user and safe_str_cmp(user.password, password): <NEW_LINE> <INDENT> return user | Function that gets called when a user calls /auth endpoint
with their username and password
:param username: User's username in string format
:param password: User's un-encrypted password in string format
:return: A UserModel object if authentication was successful, None otherwise | 625941be30dc7b7665901881 |
def _mobile(self): <NEW_LINE> <INDENT> boundary = self._boundary() <NEW_LINE> def free_plane(neighbors): <NEW_LINE> <INDENT> if len(neighbors) == 0: <NEW_LINE> <INDENT> return False <NEW_LINE> <DEDENT> if len(neighbors) == 1: <NEW_LINE> <INDENT> return True <NEW_LINE> <DEDENT> if len(neighbors) > 2: <NEW_LINE> <INDENT> return False <NEW_LINE> <DEDENT> for c1, c2 in itertools.combinations(neighbors, 2): <NEW_LINE> <INDENT> diff = c1 - c2 <NEW_LINE> if abs(diff[0]) == 2 or abs(diff[1]) == 2 or abs(diff[2]) == 2: <NEW_LINE> <INDENT> return False <NEW_LINE> <DEDENT> <DEDENT> return True <NEW_LINE> <DEDENT> mobile = set() <NEW_LINE> for cube in boundary: <NEW_LINE> <INDENT> neighbors_xy = [] <NEW_LINE> for dx, dy in zip(Config.dx, Config.dy): <NEW_LINE> <INDENT> ncube = cube + (dx, dy, 0) <NEW_LINE> if cube != ncube and self._has_cube(ncube): <NEW_LINE> <INDENT> neighbors_xy.append(ncube) <NEW_LINE> <DEDENT> <DEDENT> neighbors_xz = [] <NEW_LINE> for dx, dz in zip(Config.dx, Config.dz): <NEW_LINE> <INDENT> ncube = cube + (dx, 0, dz) <NEW_LINE> if cube != ncube and self._has_cube(ncube): <NEW_LINE> <INDENT> neighbors_xz.append(ncube) <NEW_LINE> <DEDENT> <DEDENT> neighbors_yz = [] <NEW_LINE> for dy, dz in zip(Config.dy, Config.dz): <NEW_LINE> <INDENT> ncube = cube + (0, dy, dz) <NEW_LINE> if cube != ncube and self._has_cube(ncube): <NEW_LINE> <INDENT> neighbors_yz.append(ncube) <NEW_LINE> <DEDENT> <DEDENT> if free_plane(neighbors_xy) or free_plane(neighbors_xz) or free_plane(neighbors_yz): <NEW_LINE> <INDENT> mobile.add(cube) <NEW_LINE> <DEDENT> <DEDENT> return mobile | Return the mobile cubes of the configuration.
We only care about the mobile cubes that are on the boundary. | 625941be5f7d997b871749ac |
def pkg_security(pkgs): <NEW_LINE> <INDENT> security_packages = Utils().read_file("/etc/slpkg/pkg_security") <NEW_LINE> packages = [] <NEW_LINE> for read in security_packages.splitlines(): <NEW_LINE> <INDENT> read = read.lstrip() <NEW_LINE> if not read.startswith("#"): <NEW_LINE> <INDENT> packages.append(read.replace("\n", "")) <NEW_LINE> <DEDENT> <DEDENT> for p in pkgs: <NEW_LINE> <INDENT> for pkg in packages: <NEW_LINE> <INDENT> if p == pkg: <NEW_LINE> <INDENT> Msg().security_pkg(p) <NEW_LINE> if not Msg().answer() in ["y", "Y"]: <NEW_LINE> <INDENT> raise SystemExit() | Check packages before install or upgrade for security
reasons. The configuration file is /etc/slpkg/pkg_security. | 625941be57b8e32f524833b1 |
def backproject(self): <NEW_LINE> <INDENT> self._pyhough_back = _pyhoughback_pywrap.Back(self.transform, self.theta, self.rho, self.nx, self.ny) <NEW_LINE> self.image = self._pyhough_back.backproject() <NEW_LINE> return self.image | image = houghback.backproject() | 625941be96565a6dacc8f5e4 |
def write_vals(self, vals): <NEW_LINE> <INDENT> self._buffer = np.roll(self._buffer, len(vals)) <NEW_LINE> self._buffer[0:len(vals)] = vals | write a list of vals to buffer | 625941bebaa26c4b54cb103a |
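The roll-then-overwrite idiom above keeps the newest samples at the front of the buffer; a minimal standalone run (buffer size chosen arbitrarily):

```python
import numpy as np

buffer = np.zeros(5, dtype=int)

def write_vals(vals):
    global buffer
    buffer = np.roll(buffer, len(vals))  # shift old samples toward the back
    buffer[0:len(vals)] = vals           # newest samples land at the front

write_vals([1, 2])
write_vals([3])
assert buffer.tolist() == [3, 1, 2, 0, 0]
```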
def get_short_name(self): <NEW_LINE> <INDENT> return self.short_name | Get short (first) name of a user. | 625941bed4950a0f3b08c268 |
def matrix(self, theta: float, **kwargs) -> np.ndarray: <NEW_LINE> <INDENT> c = np.cos(theta / 2) <NEW_LINE> s = np.sin(theta / 2) <NEW_LINE> matrix = np.array([[c, -1j * s], [-1j * s, c]], dtype=complex) <NEW_LINE> return matrix | The definition of the gate as a unitary matrix
Args:
theta: Angle theta of the rotation, in the interval 0 to :math:`2 \pi`
**kwargs: Additional keyword arguments
Returns:
np.ndarray | 625941be3c8af77a43ae36b5 |
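A sanity check of the rotation matrix above: it is unitary for any angle, and at θ = π it reduces to −i·X (Pauli-X up to a global phase):

```python
import numpy as np

def rx_matrix(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]], dtype=complex)

m = rx_matrix(np.pi / 3)
assert np.allclose(m @ m.conj().T, np.eye(2))   # unitary

pauli_x = np.array([[0, 1], [1, 0]], dtype=complex)
assert np.allclose(rx_matrix(np.pi), -1j * pauli_x)
```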
def get_attach_file_name(self): <NEW_LINE> <INDENT> if not self.filename: <NEW_LINE> <INDENT> raise Exception('Please attach the file!\n' 'Use attach_file("FILE_PATH") to attach file in mail.') <NEW_LINE> <DEDENT> return self.filename | get attached file name
:return: file name | 625941be507cdc57c6306bec |
def CheckDeregisterCollation(self): <NEW_LINE> <INDENT> con = sqlite.connect(":memory:") <NEW_LINE> con.create_collation("mycoll", lambda x, y: (x > y) - (x < y)) <NEW_LINE> con.create_collation("mycoll", None) <NEW_LINE> try: <NEW_LINE> <INDENT> con.execute("select 'a' as x union select 'b' as x order by x collate mycoll") <NEW_LINE> self.fail("should have raised an OperationalError") <NEW_LINE> <DEDENT> except sqlite.OperationalError as e: <NEW_LINE> <INDENT> if not e.args[0].startswith("no such collation sequence"): <NEW_LINE> <INDENT> self.fail("wrong OperationalError raised") | Register a collation, then deregister it. Make sure an error is raised if we try
to use it. | 625941be6fb2d068a760efb2 |
@patch('static_replace.staticfiles_storage', autospec=True) <NEW_LINE> @patch('xmodule.modulestore.django.modulestore', autospec=True) <NEW_LINE> @override_settings(STATIC_URL='https://example.com/static/') <NEW_LINE> def test_static_url_with_xblock_resource_on_cdn(mock_modulestore, mock_storage): <NEW_LINE> <INDENT> mock_storage.exists.return_value = False <NEW_LINE> mock_modulestore.return_value = Mock(MongoModuleStore) <NEW_LINE> pre_text = 'EMBED src ="https://example.com/static/xblock/resources/tehehe.xblock/public/images/woo.png"' <NEW_LINE> post_text = pre_text <NEW_LINE> assert replace_static_urls(pre_text, DATA_DIRECTORY, COURSE_KEY) == post_text | Make sure that for URLs with XBlock resource URL, which start with /static/,
we don't rewrite them, even if these are served from an absolute URL like a CDN. | 625941be8e7ae83300e4aee4 |
def test_successful_login_logout(self): <NEW_LINE> <INDENT> rv = self.login(GOOD_USERNAME, GOOD_PASSWORD) <NEW_LINE> assert b'You were logged in' in rv.data <NEW_LINE> rv = self.logout() <NEW_LINE> assert b'You were logged out' in rv.data | Make sure login and logout succeeds | 625941be3346ee7daa2b2c82 |
def set_global_options(self, data): <NEW_LINE> <INDENT> for member in data: <NEW_LINE> <INDENT> if hasattr(self, member): <NEW_LINE> <INDENT> setattr(self, member, str(data[member])) <NEW_LINE> <DEDENT> <DEDENT> return | :param data: class data from the settings.py file if configured
:return: | 625941be6e29344779a6252c |
def purge_cursors(self): <NEW_LINE> <INDENT> new = [] <NEW_LINE> ref = [] <NEW_LINE> for cursor in self.cursors: <NEW_LINE> <INDENT> if not cursor.tuple() in ref: <NEW_LINE> <INDENT> ref.append(cursor.tuple()) <NEW_LINE> new.append(cursor) <NEW_LINE> <DEDENT> <DEDENT> self.cursors = new <NEW_LINE> self.render() | Remove duplicate cursors that have the same position. | 625941be66673b3332b91fa8 |
@pytest.fixture(scope="module") <NEW_LINE> def ppg_generator(): <NEW_LINE> <INDENT> df = pd.read_csv(os.path.join(os.path.dirname(os.path.abspath(__file__)), 'data', 'test_data_ppg.csv'), index_col=None) <NEW_LINE> df = pd.DataFrame( index=pd.to_datetime(df["index"].values), data=df["PPG"].values, columns=["PPG"] ) <NEW_LINE> ppg_generator = ReadData(df[:200]) <NEW_LINE> return ppg_generator | Create object to mimic data streaming | 625941bed4950a0f3b08c269 |
@app.route('/temp/<filename>') <NEW_LINE> def uploaded_file(filename): <NEW_LINE> <INDENT> return send_from_directory(app.config['UPLOAD_FOLDER'], filename) | Allows us to serve temporarily hosted images in the temp directory | 625941be9b70327d1c4e0ceb |
def get(self, post_id): <NEW_LINE> <INDENT> title = 'Edit post ' + post_id <NEW_LINE> permalink = '/blog/'+ post_id <NEW_LINE> page = {'subject':title, 'permalink':permalink} <NEW_LINE> params = {'site':SITEWIDE_PARAMS, 'page':page} <NEW_LINE> self.render("editpost.html", params=params) | page get | 625941bed58c6744b4257b78 |
def test_responses( self, request, kube_apis, crd_ingress_controller_with_ap, backend_setup, test_namespace ): <NEW_LINE> <INDENT> print("------------- Run test for AP policy: file-block --------------") <NEW_LINE> print(f"Request URL: {backend_setup.req_url} and Host: {backend_setup.ingress_host}") <NEW_LINE> ensure_response_from_backend( backend_setup.req_url, backend_setup.ingress_host, check404=True ) <NEW_LINE> print("----------------------- Send request ----------------------") <NEW_LINE> resp = requests.get( f"{backend_setup.req_url}/test.bat", headers={"host": backend_setup.ingress_host}, verify=False ) <NEW_LINE> print(resp.text) <NEW_LINE> assert valid_resp_body in resp.text <NEW_LINE> assert resp.status_code == 200 | Test file-block AppProtect policy with -watch-namespace | 625941be55399d3f055885cb |
def change(self, amount, coins): <NEW_LINE> <INDENT> dp = [0] * (amount + 1) <NEW_LINE> dp[0] = 1 <NEW_LINE> for i in coins: <NEW_LINE> <INDENT> for j in range(1, amount + 1): <NEW_LINE> <INDENT> if j >= i: <NEW_LINE> <INDENT> dp[j] += dp[j - i] <NEW_LINE> <DEDENT> print(dp) <NEW_LINE> <DEDENT> <DEDENT> return dp[amount] | :type amount: int
:type coins: List[int]
:rtype: int | 625941be0a366e3fb873e72f |
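Without the debug print, the same bottom-up table can be exercised directly; dp[j] counts combinations (not permutations) because each coin is handled completely in the outer loop:

```python
def change(amount, coins):
    dp = [0] * (amount + 1)
    dp[0] = 1                      # one way to make 0: use no coins
    for coin in coins:             # outer loop over coins -> combinations
        for j in range(coin, amount + 1):
            dp[j] += dp[j - coin]
    return dp[amount]

assert change(5, [1, 2, 5]) == 4   # 5, 2+2+1, 2+1+1+1, 1+1+1+1+1
assert change(3, [2]) == 0
```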
def __init__(self, columns): <NEW_LINE> <INDENT> self.columns = columns <NEW_LINE> self.column_names = tuple([column.column_name for column in self.columns]) <NEW_LINE> self.select_from_clause = "SELECT %s FROM %s" % (", ".join(self.column_names), self.columns[0].table_name) <NEW_LINE> self.where_clause = None <NEW_LINE> self.orderby_clause = None <NEW_LINE> self.limit_clause = None <NEW_LINE> self.offset_clause = None <NEW_LINE> self.distinct_clause = None <NEW_LINE> if len([column for column in self.columns if column.is_pickletype]) == 0: <NEW_LINE> <INDENT> self.default_record_converter = self.nonpicklize_record <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> self.default_record_converter = self.picklize_record | To create a Select object, you have to provide a list of Column objects to select. Then
use the where(), limit(), offset(), distinct() and order_by() methods to specify your selection. | 625941beab23a570cc250098 |
def izip(a, b): <NEW_LINE> <INDENT> p = len(a) <NEW_LINE> q = len(b) <NEW_LINE> m = p if p < q else q <NEW_LINE> i = 0 <NEW_LINE> while True: <NEW_LINE> <INDENT> if i < m: <NEW_LINE> <INDENT> yield a[i] + b[i] <NEW_LINE> i += 1 <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> i += 1 <NEW_LINE> break | simulate izip method in itertools | 625941bec432627299f04b5c |
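Despite the name, the generator above yields element-wise *sums* and stops at the shorter input — unlike `itertools.izip`, which yields tuples. A compact equivalent and a quick run:

```python
def izip(a, b):
    # mirrors the snippet above: element-wise sums up to the shorter list
    m = min(len(a), len(b))
    for i in range(m):
        yield a[i] + b[i]

assert list(izip([1, 2, 3], [10, 20])) == [11, 22]
assert list(izip("ab", "xy")) == ["ax", "by"]   # '+' also concatenates strings
```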
def range(self, start=0, end=None): <NEW_LINE> <INDENT> if start == 0 and end is None: <NEW_LINE> <INDENT> return self <NEW_LINE> <DEDENT> lo = 0 <NEW_LINE> hi = len(self.tokens) <NEW_LINE> while lo < hi: <NEW_LINE> <INDENT> mid = (lo + hi) // 2 <NEW_LINE> if start > self.tokens[mid].pos: <NEW_LINE> <INDENT> lo = mid + 1 <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> hi = mid <NEW_LINE> <DEDENT> <DEDENT> start = lo <NEW_LINE> if end is not None: <NEW_LINE> <INDENT> lo = 0 <NEW_LINE> hi = len(self.tokens) <NEW_LINE> while lo < hi: <NEW_LINE> <INDENT> mid = (lo + hi) // 2 <NEW_LINE> if end < self.tokens[mid].pos: <NEW_LINE> <INDENT> hi = mid <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> lo = mid + 1 <NEW_LINE> <DEDENT> <DEDENT> end = lo - 1 <NEW_LINE> <DEDENT> s = slice(start, end) <NEW_LINE> n = type(self).__new__(type(self)) <NEW_LINE> n._d = self._d <NEW_LINE> n.tokens = self.tokens[s] <NEW_LINE> n.classes = self.classes[s] <NEW_LINE> return n | Return a new instance of the DocInfo class for the selected range.
Only the tokens completely contained within the range start..end are
added to the new instance. This can be used to perform fast searches
on a subset of a document. | 625941be82261d6c526ab3b4 |
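The two hand-rolled binary searches above correspond to `bisect_left` (first token with pos >= start) and `bisect_right` (one past the last token with pos <= end); a standalone sketch on a plain position list (the method above applies its own boundary convention on top of this):

```python
import bisect

positions = [0, 5, 9, 14, 20]                 # sorted token positions
start, end = 5, 14
lo = bisect.bisect_left(positions, start)     # first index with pos >= start
hi = bisect.bisect_right(positions, end)      # one past last index with pos <= end
assert positions[lo:hi] == [5, 9, 14]
```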
def vertical(self, axis=0): <NEW_LINE> <INDENT> tiles = self.tiles.T <NEW_LINE> n = tiles.shape[1] <NEW_LINE> for i, row in enumerate(tiles): <NEW_LINE> <INDENT> if np.count_nonzero(row) != n: <NEW_LINE> <INDENT> row = np.array(list(filter(None, row))) <NEW_LINE> if axis == 1: <NEW_LINE> <INDENT> tiles[i] = np.append(np.zeros(n - len(row)), row) <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> tiles[i] = np.append(row, np.zeros(n - len(row))) <NEW_LINE> <DEDENT> <DEDENT> <DEDENT> tiles = tiles.T | FIXME (REFACTOR)
Args:
axis:
Returns: | 625941be8e05c05ec3eea28a |
def __init__(self, path: str): <NEW_LINE> <INDENT> self.path = path <NEW_LINE> self.recipes = [] <NEW_LINE> self.rec_counter = 0 | Constructor
:param path: path to the website | 625941bea8370b77170527b8 |
def create_model_vgg16(num_classes, input_shape): <NEW_LINE> <INDENT> base_model = keras.applications.vgg16.VGG16(weights='imagenet', include_top=False, input_shape=input_shape) <NEW_LINE> x = base_model.output <NEW_LINE> x = keras.layers.GlobalAveragePooling2D()(x) <NEW_LINE> x = keras.layers.Dense(num_classes)(x) <NEW_LINE> model = keras.models.Model(inputs=base_model.input, outputs=x) <NEW_LINE> return model | Create the model (VGG16)
Parameters
----------
num_classes : int
Number of classes
input_shape : tuple
Input image size | 625941beec188e330fd5a6bc |
def post(self): <NEW_LINE> <INDENT> wssk = self.request.get('websafeSessionKey') <NEW_LINE> ConferenceApi._checkForFeaturedSpeaker(wssk) | Check if there is more than one session by a speaker single conference | 625941bebe383301e01b53a3 |
def movement(self, screen_size: tuple, dt: float, player: Player): <NEW_LINE> <INDENT> if self.collidesWith(player): <NEW_LINE> <INDENT> self.color = (0, 255, 0) <NEW_LINE> self.run = False <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> self.color = (255, 0, 0) <NEW_LINE> <DEDENT> if self.y != player.y: <NEW_LINE> <INDENT> self.y += int(self.speed * ((player.y-self.y)/(abs(player.y-self.y))) * dt) <NEW_LINE> <DEDENT> if self.x != player.x: <NEW_LINE> <INDENT> self.x += int(self.speed * ((player.x-self.x)/(abs(player.x-self.x))) * dt) <NEW_LINE> <DEDENT> if self.y != player.y: <NEW_LINE> <INDENT> self.y += int(self.speed * ((player.y - self.y) / (abs(player.y - self.y))) * dt) <NEW_LINE> <DEDENT> if self.x != player.x: <NEW_LINE> <INDENT> self.x += int(self.speed * ((player.x - self.x) / (abs(player.x - self.x))) * dt) | Moves the entity toward the player, changing its color on collision.
Keyword arguments:
screen_size -- the size of the display window
dt -- a multiplier for movement amount based on fps (60 / fps) | 625941bee5267d203edcdbb7 |
def train_bn(X, Y): <NEW_LINE> <INDENT> data = make_bn_input_data(X=X, Y=Y) <NEW_LINE> learner = PGMLearner() <NEW_LINE> bn = learner.discrete_estimatebn(data) <NEW_LINE> return bn | Trains bayesian network on X and Y vectors.
:param X: pd.DataFrame of predictors from read_and_preproc_data().
:param Y: pd.DataFrames of response from read_and_preproc_data().
:return: libpgm.discretebayesiannetwork.DiscreteBayesianNetwork | 625941bea219f33f34628885 |
def __init__(self, fname, get): <NEW_LINE> <INDENT> absolute_path = "%s%s" % (core.config.PATH_WWW, fname.rstrip("/")) <NEW_LINE> if os.path.isdir(absolute_path) is True: <NEW_LINE> <INDENT> for item in core.config.INDEX_FNAMES: <NEW_LINE> <INDENT> absolute_path_item = "%s/%s" % (absolute_path, item) <NEW_LINE> self.retrieve(absolute_path_item, get) <NEW_LINE> if self.response == response_code(200): <NEW_LINE> <INDENT> break <NEW_LINE> <DEDENT> <DEDENT> <DEDENT> else: <NEW_LINE> <INDENT> self.retrieve(absolute_path, get) | Sets the absolute path and iterates over index files in directories. | 625941bef9cc0f698b140515 |
def logp(self, value): <NEW_LINE> <INDENT> mu = self.mu <NEW_LINE> lam = self.lam <NEW_LINE> alpha = self.alpha <NEW_LINE> return bound(logpow(lam / (2. * np.pi), 0.5) - logpow(value - alpha, 1.5) - (0.5 * lam / (value - alpha) * ((value - alpha - mu) / mu)**2), value > 0, value - alpha > 0, mu > 0, lam > 0, alpha >= 0) | Calculate log-probability of Wald distribution at specified value.
Parameters
----------
value : numeric
Value(s) for which log-probability is calculated. If the log probabilities for multiple
values are desired the values must be provided in a numpy array or theano tensor
Returns
-------
TensorVariable | 625941be0a366e3fb873e730 |
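In closed form, the bound expression above is the log of the shifted Wald (inverse Gaussian) density; writing x for the value:

```latex
\log f(x \mid \mu, \lambda, \alpha)
  = \tfrac{1}{2}\log\frac{\lambda}{2\pi}
  - \tfrac{3}{2}\log(x - \alpha)
  - \frac{\lambda}{2(x - \alpha)}\left(\frac{x - \alpha - \mu}{\mu}\right)^{2},
\qquad x > \alpha .
```

Term by term this matches the code: `logpow(lam / (2. * np.pi), 0.5)` is the first term, `logpow(value - alpha, 1.5)` the second, and the remaining factor carries the squared standardized deviation.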
def show_join(handle, sid, chan): <NEW_LINE> <INDENT> from x84.bbs import getsession, getterminal <NEW_LINE> session, term = getsession(), getterminal() <NEW_LINE> return u''.join(( time.strftime('%H:%M'), u' ', term.blue('-'), u'!', term.blue('-'), u' ', term.bold_cyan(handle), u' ', (u''.join((term.bold_black('['), term.cyan(sid), term.bold_black(']'), u' ',)) if 'sysop' in session.user.groups else u''), 'has joined ', term.bold(chan),)) | return terminal sequence for /join performed by handle. | 625941bea8ecb033257d2fe6 |
def addSingleTraitTerm(self,K=None,is_noise=False,normalize=True,Ks=None): <NEW_LINE> <INDENT> assert self.P == 1, 'Incompatible number of traits' <NEW_LINE> assert K!=None or is_noise, 'Specify covariance structure' <NEW_LINE> if is_noise: <NEW_LINE> <INDENT> assert self.noisPos==None, 'noise term already exists' <NEW_LINE> K = SP.eye(self.Nt) <NEW_LINE> self.noisPos = self.n_terms <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> assert K.shape[0]==self.Nt, 'Incompatible shape' <NEW_LINE> assert K.shape[1]==self.Nt, 'Incompatible shape' <NEW_LINE> <DEDENT> if Ks!=None: <NEW_LINE> <INDENT> assert Ks.shape[0]==self.N, 'Incompatible shape' <NEW_LINE> <DEDENT> if normalize: <NEW_LINE> <INDENT> Norm = 1/K.diagonal().mean() <NEW_LINE> K *= Norm <NEW_LINE> if Ks!=None: Ks *= Norm <NEW_LINE> <DEDENT> self.vd.addTerm(limix.CSingleTraitTerm(K)) <NEW_LINE> if Ks!=None: self.setKstar(self.n_terms,Ks) <NEW_LINE> self.n_terms+=1 <NEW_LINE> self.gp = None <NEW_LINE> self.init = False <NEW_LINE> self.fast = False <NEW_LINE> self.optimum = None <NEW_LINE> self.cache['Sigma'] = None <NEW_LINE> self.cache['Hessian'] = None <NEW_LINE> self.cache['Lparams'] = None <NEW_LINE> self.cache['paramsST']= None | add random effects term for single trait models (no trait-trait covariance matrix)
Args:
K: NxN sample covariance matrix
is_noise: bool labeling the noise term (noise term has K=eye)
normalize: if True, K and Ks are scaled such that K.diagonal().mean()==1
Ks: NxN test cross covariance for predictions | 625941bed6c5a10208143f60 |
def iround(i): <NEW_LINE> <INDENT> return int(round(i) - .5) + (i > 0) | round to the nearest integer | 625941be71ff763f4b54959f |
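Note that `round` in Python 3 uses banker's rounding, so exact halves may not behave as this Python 2-era trick intended; away from half values it rounds to the nearest integer as advertised:

```python
def iround(i):
    return int(round(i) - .5) + (i > 0)

assert iround(2.3) == 2
assert iround(2.7) == 3
assert iround(-2.3) == -2
assert iround(-2.7) == -3
assert iround(0) == 0
```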
def __init__(self, url, user, password, size=-1, m_date=None): <NEW_LINE> <INDENT> super(FakeTransport, self).__init__(url, user, password) <NEW_LINE> self.size = size <NEW_LINE> self.m_date = m_date | Create an instance of a data transport mechanism.
:param url: The URL of the resource to be obtained.
:type url: str
:param user: The user name required to access the resource specified in
`url` (optional).
:type user: str
:param password: The password required to access the resource specified
in `url` (optional).
:type password: str | 625941be925a0f43d2549d8c |
def retrieve_all(self, sids, default_none=False): <NEW_LINE> <INDENT> hits, missing, failures = {}, set(), [] <NEW_LINE> for sid in sids: <NEW_LINE> <INDENT> try: <NEW_LINE> <INDENT> asset = self._asset_cache[sid] <NEW_LINE> if not default_none and asset is None: <NEW_LINE> <INDENT> raise SidsNotFound(sids=[sid]) <NEW_LINE> <DEDENT> hits[sid] = asset <NEW_LINE> <DEDENT> except KeyError: <NEW_LINE> <INDENT> missing.add(sid) <NEW_LINE> <DEDENT> <DEDENT> if not missing: <NEW_LINE> <INDENT> return [hits[sid] for sid in sids] <NEW_LINE> <DEDENT> update_hits = hits.update <NEW_LINE> type_to_assets = self.group_by_type(missing) <NEW_LINE> failures = {failure: None for failure in type_to_assets.pop(None, ())} <NEW_LINE> update_hits(failures) <NEW_LINE> self._asset_cache.update(failures) <NEW_LINE> if failures and not default_none: <NEW_LINE> <INDENT> raise SidsNotFound(sids=list(failures)) <NEW_LINE> <DEDENT> update_hits(self.retrieve_equities(type_to_assets.pop('equity', ()))) <NEW_LINE> update_hits( self.retrieve_futures_contracts(type_to_assets.pop('future', ())) ) <NEW_LINE> if type_to_assets: <NEW_LINE> <INDENT> raise AssertionError( "Found asset types: %s" % list(type_to_assets.keys()) ) <NEW_LINE> <DEDENT> return [hits[sid] for sid in sids] | Retrieve all assets in `sids`.
Parameters
----------
sids : iterable of int
Assets to retrieve.
default_none : bool
If True, return None for failed lookups.
If False, raise `SidsNotFound`.
Returns
-------
assets : list[int or None]
A list of the same length as `sids` containing Assets (or Nones)
corresponding to the requested sids.
Raises
------
SidsNotFound
When a requested sid is not found and default_none=False. | 625941bea8370b77170527b9 |
def _tif_management(self, path): <NEW_LINE> <INDENT> _tif_paths = [i for i in glob2.iglob(os.path.join(path, "*.tif")) if "cal_image" not in i] <NEW_LINE> _tifs = {self._extract_index(i):{"original":i} for i in _tif_paths if "_bin.tif" not in i} <NEW_LINE> _binary_mask = [i for i in _tif_paths if "_bin.tif" in i] <NEW_LINE> for i in _binary_mask: <NEW_LINE> <INDENT> _tifs[self._extract_index(i)]["binary"]=i <NEW_LINE> <DEDENT> return list(_tifs.values()) | given a folder path, get all the paths of original image and binary masks
Args:
path: the path string
Returns:
list[dict]: a list of dictionary, sorted by their batch number
each element dictionary has path of the binary mask
and the original image | 625941be26238365f5f0ed83 |
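The pairing logic above — index the originals first, then attach each `_bin.tif` mask under the same index — can be exercised standalone; `extract_index` here is a hypothetical stand-in for `self._extract_index`, and the file names are made up:

```python
import re

def extract_index(path):
    # hypothetical index extractor: first number found in the name
    return int(re.search(r"(\d+)", path).group(1))

paths = ["batch_2.tif", "batch_1.tif", "batch_1_bin.tif", "batch_2_bin.tif"]
tifs = {extract_index(p): {"original": p} for p in paths if "_bin.tif" not in p}
for p in (p for p in paths if "_bin.tif" in p):
    tifs[extract_index(p)]["binary"] = p

assert tifs[1] == {"original": "batch_1.tif", "binary": "batch_1_bin.tif"}
assert tifs[2] == {"original": "batch_2.tif", "binary": "batch_2_bin.tif"}
```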
def testMinimalFields(self): <NEW_LINE> <INDENT> store = self.parse('---\n' 'name: The Grand Petstore\n') <NEW_LINE> self.assertEquals('The Grand Petstore', store.name) <NEW_LINE> self.assertEquals(None, store.address) <NEW_LINE> self.assertEquals(None, store.pets) <NEW_LINE> self.assertEquals(None, store.mascot) | Test the case where minimal required fields are parsed. | 625941be4527f215b584c372 |
def testMovePrivateToPublic(self): <NEW_LINE> <INDENT> src_repo = SourceRepository.objects.get(name="public-repo") <NEW_LINE> src_repo.anonymous_access = False <NEW_LINE> src_repo.save() <NEW_LINE> self.assertEquals(src_repo.repo_path, os.path.abspath(os.path.join(PRIVATE_REPO_DIR, 'public-repo'))) <NEW_LINE> self.assertTrue(os.path.exists(src_repo.repo_path)) | Switching the anonymous_access flag should change the location of the
repository | 625941be6aa9bd52df036cbb |
def pop_up_normal(self, title, msg): <NEW_LINE> <INDENT> popup = tk.Tk() <NEW_LINE> popup.wm_title(title) <NEW_LINE> label = ttk.Label(popup, text=msg, font=self.normal_font) <NEW_LINE> label.pack(side="top", fill="x", pady=10) <NEW_LINE> button = ttk.Button(popup, text="Close", command=popup.destroy) <NEW_LINE> button.pack() <NEW_LINE> popup.mainloop() | This will show a normal style popup window.
:param msg: The message you want to show in the popup window
:param title: The title of the popup window | 625941bea79ad161976cc05d |
def __init__(self, requester, view_manager, taskeditor=None): <NEW_LINE> <INDENT> self.__requester = requester <NEW_LINE> self.__view_manager = view_manager <NEW_LINE> self.selection_changed_callback_listeners = [] <NEW_LINE> if taskeditor: <NEW_LINE> <INDENT> self.__ui = taskeditor <NEW_LINE> self.__builder = self.__ui.get_builder() <NEW_LINE> self.__toolbar = self.__builder.get_object('task_tb1') <NEW_LINE> self.__task_id = taskeditor.get_task() <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> self.__ui = self.__view_manager.get_browser() <NEW_LINE> self.__builder = self.__ui.get_builder() <NEW_LINE> self.__toolbar = self.__builder.get_object('task_toolbar') <NEW_LINE> self.__task_id = None <NEW_LINE> self.__view_manager.browser.selection.connect( "changed", self.__selection_changed) <NEW_LINE> <DEDENT> self.taskwidget_id = 0 <NEW_LINE> self.taskwidget_widg = [] | Construct a PluginAPI object.
@param requester: The requester.
@param view_manager: The view manager
@param taskeditor: The TaskEditor, if we are in one;
None otherwise.
def sync_vcpus_topology(self): <NEW_LINE> <INDENT> if not self.cpu.has_topology(): <NEW_LINE> <INDENT> return <NEW_LINE> <DEDENT> if not self.vcpus: <NEW_LINE> <INDENT> self.vcpus = self.cpu.vcpus_from_topology() <NEW_LINE> <DEDENT> self.cpu.set_topology_defaults(self.vcpus) | <cpu> topology count and <vcpus> always need to match. Handle
the syncing here since we are less constrained than doing it
in CPU set_defaults | 625941be3317a56b86939b78 |
def _generic_signature_fn(examples, unused_features, predictions): <NEW_LINE> <INDENT> tensors = {'inputs': examples} <NEW_LINE> if not isinstance(predictions, dict): <NEW_LINE> <INDENT> predictions = {'outputs': predictions} <NEW_LINE> <DEDENT> tensors.update(predictions) <NEW_LINE> default_signature = exporter.generic_signature(tensors) <NEW_LINE> return default_signature, {} | Creates generic signature from given examples and predictions.
This is needed for backward compatibility with the default behaviour of
export_estimator.
Args:
examples: `Tensor`.
unused_features: `dict` of `Tensor`s.
predictions: `dict` of `Tensor`s.
Returns:
Tuple of default signature and named signature. | 625941be50485f2cf553ccb1 |
def costFunction(x, dist, intrinsics, X, Pc, detections, args, handles): <NEW_LINE> <INDENT> X.fromVector(list(x)) <NEW_LINE> cost = [] <NEW_LINE> costFunction.counter += 1 <NEW_LINE> multiples = [100*n for n in range(1, 100+1)] <NEW_LINE> if not handles: <NEW_LINE> <INDENT> handles = detections <NEW_LINE> <DEDENT> for detection, handle in zip(detections, handles): <NEW_LINE> <INDENT> camera = [camera for camera in X.cameras if camera.id == detection.camera[1:]][0] <NEW_LINE> aruco = [aruco for aruco in X.arucos if aruco.id == detection.aruco[1:]][0] <NEW_LINE> Ta = aruco.getT() <NEW_LINE> Tc = camera.getT() <NEW_LINE> T = np.matmul(inv(Tc), Ta) <NEW_LINE> xypix = points2imageFromT(T, intrinsics, Pc, dist) <NEW_LINE> distanceFourPoints = 0 <NEW_LINE> for i in range(len(xypix)): <NEW_LINE> <INDENT> distance = ((detection.corner[0][i, 0] - xypix[i, 0] ) ** 2 + (detection.corner[0][i, 1] - xypix[i, 1])**2) ** (1/2.0) <NEW_LINE> distanceFourPoints = distanceFourPoints + distance <NEW_LINE> <DEDENT> cost.append(distanceFourPoints) <NEW_LINE> if args['do'] and costFunction.counter in multiples: <NEW_LINE> <INDENT> handle.handle_scatter.set_offsets(xypix[:, :]) <NEW_LINE> handle.handle_text.set_position((xypix[0, 0], xypix[0, 1])) <NEW_LINE> X.setPlot3D() <NEW_LINE> <DEDENT> <DEDENT> if args['do'] and costFunction.counter in multiples: <NEW_LINE> <INDENT> plt.draw() <NEW_LINE> plt.pause(0.01) <NEW_LINE> <DEDENT> return cost | Cost function
| 625941beb5575c28eb68df16 |
def find_unnas(text): <NEW_LINE> <INDENT> digit_pattern = re.compile('(\d{4})') <NEW_LINE> unna_pattern = re.compile('UN|NA') <NEW_LINE> unna_spans = [(m.span(), m.group()) for m in unna_pattern.finditer(text)] <NEW_LINE> unnas = [] <NEW_LINE> for match in digit_pattern.finditer(text): <NEW_LINE> <INDENT> span = match.span() <NEW_LINE> code = match.group() <NEW_LINE> diffs = [span[0] - unna_span[0][0] for unna_span in unna_spans] <NEW_LINE> positive_diffs = [d for d in diffs if d > 0] <NEW_LINE> if positive_diffs: <NEW_LINE> <INDENT> min_diff_idx = diffs.index(min(positive_diffs)) <NEW_LINE> prefix = unna_spans[min_diff_idx][1] <NEW_LINE> unnas.append(prefix + code) <NEW_LINE> <DEDENT> <DEDENT> return unnas | Returns a list of all UN/NA numbers found in a piece of text. For each
four-digit code, the nearest preceding "UN" or "NA" prefix is attached. | 625941be5e10d32532c5ee3f |
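The UN/NA extraction above can be sketched more compactly when the prefix is assumed to sit immediately before the digits (possibly separated by whitespace); this simplified standalone version is an illustration, not the dataset's original implementation:

```python
import re

def find_unnas_simple(text):
    # Match a "UN" or "NA" prefix followed (optionally after whitespace)
    # by exactly four digits, and join prefix and code.
    return [m.group(1) + m.group(2)
            for m in re.finditer(r'(UN|NA)\s*(\d{4})', text)]
```

For example, `find_unnas_simple("Shipment contains UN 1203 and NA1993.")` yields `["UN1203", "NA1993"]`.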
def pow(self, by: int | float | Callable | Types) -> Types: <NEW_LINE> <INDENT> return Types(self, EagerModifier(), PowModifier(by)) | This modifier is used to raise the original int or float value
to the given power | 625941be99fddb7c1c9de2aa |
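The surrounding `Types`/`PowModifier` machinery is not shown in this row, but the effect of a power modifier can be sketched with plain functions; `pow_modifier` below is a hypothetical stand-in, not the library's API:

```python
def pow_modifier(exponent):
    # Return a function that raises its argument to the given power,
    # mimicking what a PowModifier-style wrapper would apply lazily.
    def apply(value):
        return value ** exponent
    return apply

square = pow_modifier(2)
```

Calling `square(3.0)` then returns `9.0`.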
def load_file(self, fs_esp_pn): <NEW_LINE> <INDENT> assert fs_esp_pn <NEW_LINE> self.parse_esp_xml(fs_esp_pn + ".xml") | Loads the holding-procedure data from a file on disk
@param fs_esp_pn: pathname of the file on disk | 625941be091ae35668666e7b |
def _initialize(self): <NEW_LINE> <INDENT> tags = self._config.get('tags', default=JsonObject()) <NEW_LINE> if not type(tags) is dict and not type(tags) is JsonObject: <NEW_LINE> <INDENT> raise BadMonitorConfiguration('The configuration field \'tags\' is not a dict or JsonObject', 'tags') <NEW_LINE> <DEDENT> tags = JsonObject(content=tags) <NEW_LINE> tags['parser'] = 'agent-metrics' <NEW_LINE> self.log_config = { 'attributes': tags, 'parser': 'agent-metrics', 'path': 'linux_system_metrics.log', } <NEW_LINE> collector_directory = self._config.get('collectors_directory', convert_to=str, default=SystemMetricsMonitor.__get_collectors_directory()) <NEW_LINE> collector_directory = os.path.realpath(collector_directory) <NEW_LINE> if not os.path.isdir(collector_directory): <NEW_LINE> <INDENT> raise BadMonitorConfiguration('No such directory for collectors: %s' % collector_directory, 'collectors_directory') <NEW_LINE> <DEDENT> self.options = TcollectorOptions() <NEW_LINE> self.options.cdir = collector_directory <NEW_LINE> self.modules = tcollector.load_etc_dir(self.options, tags) <NEW_LINE> self.tags = tags | Performs monitor-specific initialization. | 625941be73bcbd0ca4b2bf8f |
def sign_records_users(self): <NEW_LINE> <INDENT> return Sign_Record_Controller().sign_records_users() | Query whether the user currently has a sign-in record. A timetable value of -1 indicates there is no sign-in record. | 625941befbf16365ca6f60d6 |
def _setup_ps_client_handle(self): <NEW_LINE> <INDENT> self._ps_client = ParameterClient(host=os.environ['SYMPH_PS_SERVING_HOST'], port=os.environ['SYMPH_PS_SERVING_PORT'], agent_scope=self._agent_scope) | Initialize self._ps_client and connect it to the ps. | 625941be10dbd63aa1bd2abe |
def grid_crop_into_data(image, stride=1, buffer_size=0, format=None, parameters=None): <NEW_LINE> <INDENT> image = open_image(image) <NEW_LINE> if format is None: <NEW_LINE> <INDENT> format = image.format <NEW_LINE> <DEDENT> if parameters is None: <NEW_LINE> <INDENT> parameters = {} <NEW_LINE> convert = None <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> parameters = parameters.copy() <NEW_LINE> convert = parameters.get('convert', None) <NEW_LINE> if convert is not None: <NEW_LINE> <INDENT> del parameters['convert'] <NEW_LINE> <DEDENT> <DEDENT> for (row, column), grid_image in grid_crop(image, stride, buffer_size): <NEW_LINE> <INDENT> buf = io.BytesIO() <NEW_LINE> if convert is None: <NEW_LINE> <INDENT> grid_image.save(buf, format=format, **parameters) <NEW_LINE> grid_data = buf.getvalue() <NEW_LINE> <DEDENT> elif convert['mode']=='P' and 'colors' in convert: <NEW_LINE> <INDENT> grid_data = imgquant(grid_image, colors=convert['colors']) <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> grid_image = convert_mode(grid_image, **convert) <NEW_LINE> grid_image.save(buf, format=format, **parameters) <NEW_LINE> grid_data = buf.getvalue() <NEW_LINE> <DEDENT> yield (row, column), grid_data | Crop a MetaTile image into a grid and return the pieces as encoded image file data.
:class:`PIL.Image.Image`.
`format` is one of supported image formats::
'JPEG'
'PNG'
'WEBP'
'TIFF'
`format` and `parameters` are passed directly to :meth:`PIL.Image.Image.save()`::
args = {
'format': 'JPEG',
'parameters': {'optimize':True, 'quality':80}
}
image.save(fp, **args)
`parameter` can also have a ``convert`` key which contains mode conversion
arguments which is passed to :meth:`PIL.Image.Image.convert()`::
args = {
'format': 'JPEG',
'parameters': {
'optimize': True,
'convert': {
'mode': 'P',
'colors': 64
}
}
}
image.convert(**args['convert'])
.. _image formats: <https://pillow.readthedocs.org/handbook/image-file-formats.html>
See :func:`~stonemason.util.postprocessing.gridcrop()` for parameter descriptions.
:param image: Image to crop, must be square.
:type image: :class:`PIL.Image.Image` or `bytes` or `file`
:param stride: Number of grid images per axis.
:type stride: int
:param buffer_size: Size of the buffer to be shaved each side in pixels,
default is 0, means no buffer is shaved.
:type buffer_size: int
:param format: Image format supported by PIL/Pillow, eg: ``JPEG``, ``PNG``,
``WEBP``. The actual list for formats depends on Pillow installation,
see `image formats`_ for a complete format list.
:type format: str
:param parameters: Image saving parameters, available list depends on
`format` selected. `image formats`_ for a complete parameter list.
:type parameters: dict
:return: A iterator of cropped image data.
:rtype: iterator | 625941becdde0d52a9e52f48 |