def _multiple_self_ref_fk_check(class_model):
    self_fk = []
    for f in class_model._meta.concrete_fields:
        if f.related_model in self_fk:
            return True
        if f.related_model == class_model:
            self_fk.append(class_model)
    return False
We check whether a class has more than 1 FK reference to itself.
def get_citation_years(graph: BELGraph) -> List[Tuple[int, int]]:
    return create_timeline(count_citation_years(graph))
Create a citation timeline counter from the graph.
def _get_Ks(self): "Ks as an array and type-checked." Ks = as_integer_type(self.Ks) if Ks.ndim != 1: raise TypeError("Ks should be 1-dim, got shape {}".format(Ks.shape)) if Ks.min() < 1: raise ValueError("Ks should be positive; got {}".format(Ks.min())) re...
Ks as an array and type-checked.
def bake(binder, recipe_id, publisher, message, cursor): recipe = _get_recipe(recipe_id, cursor) includes = _formatter_callback_factory() binder = collate_models(binder, ruleset=recipe, includes=includes) def flatten_filter(model): return (isinstance(model, cnxepub.CompositeDocument) or ...
Given a `Binder` as `binder`, bake the contents and persist those changes alongside the published content.
def _new_from_xml(cls, xmlnode): child = xmlnode.children fields = [] while child: if child.type != "element" or child.ns().content != DATAFORM_NS: pass elif child.name == "field": fields.append(Field._new_from_xml(child)) child...
Create a new `Item` object from an XML element. :Parameters: - `xmlnode`: the XML element. :Types: - `xmlnode`: `libxml2.xmlNode` :return: the object created. :returntype: `Item`
def use(cls, name, method: [str, Set, List], url=None): if not isinstance(method, (str, list, set, tuple)): raise BaseException('Invalid type of method: %s' % type(method).__name__) if isinstance(method, str): method = {method} cls._interface[name] = [{'method': method, '...
interface helper function
def node_has_namespaces(node: BaseEntity, namespaces: Set[str]) -> bool:
    ns = node.get(NAMESPACE)
    return ns is not None and ns in namespaces
Pass for nodes that have one of the given namespaces.
def check_write_permissions(file):
    try:
        open(file, 'a')
    except IOError:
        print("Can't open file {}. "
              "Please grant write permissions or change the path in your config".format(file))
        sys.exit(1)
Check if we can write to the given file. Otherwise, since we might detach the process to run in the background, we might never find out that writing failed and get an ugly exit message on startup, for example: ERROR: Child exited immediately with non-zero exit code 127. So we catch this error upfront ...
def __prepare_domain(data): if not data: raise JIDError("Domain must be given") data = unicode(data) if not data: raise JIDError("Domain must be given") if u'[' in data: if data[0] == u'[' and data[-1] == u']': try: ...
Prepare domainpart of the JID. :Parameters: - `data`: Domain part of the JID :Types: - `data`: `unicode` :raise JIDError: if the domain name is too long.
def SInt(value, width):
    return Operators.ITEBV(width, Bit(value, width - 1) == 1,
                           GetNBits(value, width) - 2**width,
                           GetNBits(value, width))
Convert a bitstring `value` of `width` bits to a signed integer representation. :param value: The value to convert. :type value: int or long or BitVec :param int width: The width of the bitstring to consider :return: The converted value :rtype: int or long or BitVec
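Outside the symbolic-execution context, the same two's-complement reinterpretation that SInt performs can be sketched in plain Python (`to_signed` is a hypothetical name standing in for the Operators/Bit/GetNBits helpers):

```python
def to_signed(value, width):
    # Keep only the low `width` bits, then subtract 2**width
    # when the sign bit (bit width - 1) is set.
    value &= (1 << width) - 1
    if value >> (width - 1):
        value -= 1 << width
    return value
```

For example, `to_signed(0xFF, 8)` gives -1, the signed reading of an all-ones byte.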
def link_user(self, enterprise_customer, user_email): try: existing_user = User.objects.get(email=user_email) self.get_or_create(enterprise_customer=enterprise_customer, user_id=existing_user.id) except User.DoesNotExist: PendingEnterpriseCustomerUser.objects.get_or_c...
Link user email to Enterprise Customer. If :class:`django.contrib.auth.models.User` instance with specified email does not exist, :class:`.PendingEnterpriseCustomerUser` instance is created instead.
def setup_figure(figsize, as_subplot):
    if not as_subplot:
        fig = plt.figure(figsize=figsize)
        return fig
Setup a figure for plotting an image. Parameters ----------- figsize : (int, int) The size of the figure in (rows, columns). as_subplot : bool If the figure is a subplot, the setup_figure function is omitted to ensure that each subplot does not create a \ new figure and so that ...
def edit(self, config, etag): data = self._json_encode(config) headers = self._default_headers() if etag is not None: headers["If-Match"] = etag return self._request(self.name, ok_status=None, data=data, ...
Update template config for specified template name. .. __: https://api.go.cd/current/#edit-template-config Returns: Response: :class:`gocd.api.response.Response` object
def basen_to_integer(self, X, cols, base): out_cols = X.columns.values.tolist() for col in cols: col_list = [col0 for col0 in out_cols if str(col0).startswith(str(col))] insert_at = out_cols.index(col_list[0]) if base == 1: value_array = np.array([int(...
Convert basen codes to integers. Parameters ---------- X : DataFrame encoded data cols : list-like Column names in the DataFrame to be encoded base : int The base of transform Returns ------- numerical: DataFrame
def pr_lmean(self):
    precision = self.precision()
    recall = self.recall()
    if not precision or not recall:
        return 0.0
    elif precision == recall:
        return precision
    return (precision - recall) / (math.log(precision) - math.log(recall))
r"""Return logarithmic mean of precision & recall. The logarithmic mean is: 0 if either precision or recall is 0, the precision if they are equal, otherwise :math:`\frac{precision - recall} {ln(precision) - ln(recall)}` Cf. https://en.wikipedia.org/wiki/Logarithmic_mean...
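As a standalone sketch, free of the class's precision()/recall() plumbing, the logarithmic mean described above is:

```python
import math

def log_mean(p, r):
    # 0 if either value is 0, p if they are equal,
    # otherwise (p - r) / (ln p - ln r).
    if not p or not r:
        return 0.0
    if p == r:
        return p
    return (p - r) / (math.log(p) - math.log(r))
```

For two distinct positive inputs the result always lies strictly between them, between their geometric and arithmetic means.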
def plot_cdf(self, graphing_library='matplotlib'): graphed = False for percentile_csv in self.percentiles_files: csv_filename = os.path.basename(percentile_csv) column = self.csv_column_map[percentile_csv.replace(".percentiles.", ".")] if not self.check_important_sub_metrics(column): c...
plot CDF for important sub-metrics
def Tb(CASRN, AvailableMethods=False, Method=None, IgnoreMethods=[PSAT_DEFINITION]): r def list_methods(): methods = [] if CASRN in CRC_inorganic_data.index and not np.isnan(CRC_inorganic_data.at[CASRN, 'Tb']): methods.append(CRC_INORG) if CASRN in CRC_organic_data.index and ...
r'''This function handles the retrieval of a chemical's boiling point. Lookup is based on CASRNs. Will automatically select a data source to use if no Method is provided; returns None if the data is not available. Prefered sources are 'CRC Physical Constants, organic' for organic chemicals, and 'CR...
def pklc_fovcatalog_objectinfo( pklcdir, fovcatalog, fovcatalog_columns=[0,1,2, 6,7, 8,9, 10,11, 13,14,15,16, 17,18,19, 20,21], ...
Adds catalog info to objectinfo key of all pklcs in lcdir. If fovcatalog, fovcatalog_columns, fovcatalog_colnames are provided, uses them to find all the additional information listed in the fovcatalog_colname keys, and writes this info to the objectinfo key of each lcdict. This makes it easier for ast...
def _pkl_periodogram(lspinfo, plotdpi=100, override_pfmethod=None): pgramylabel = PLOTYLABELS[lspinfo['method']] periods = lspinfo['periods'] lspvals = lspinfo['lspvals'] bestperiod = lspinfo['bestperiod'] nbestperiods = lspinfo['nbestperiods'] nbestlspv...
This returns the periodogram plot PNG as base64, plus info as a dict. Parameters ---------- lspinfo : dict This is an lspinfo dict containing results from a period-finding function. If it's from an astrobase period-finding function in periodbase, this will already be in the correct...
def __groupchat_message(self, stanza):
    fr = stanza.get_from()
    key = fr.bare().as_unicode()
    rs = self.rooms.get(key)
    if not rs:
        self.__logger.debug("groupchat message from unknown source")
        return False
    rs.process_groupchat_message(stanza)
    return True
Process a groupchat message from a MUC room. :Parameters: - `stanza`: the stanza received. :Types: - `stanza`: `Message` :return: `True` if the message was properly recognized as directed to one of the managed rooms, `False` otherwise. :returntype: `...
def is_self(addr):
    ips = []
    for i in netifaces.interfaces():
        entry = netifaces.ifaddresses(i)
        if netifaces.AF_INET in entry:
            for ipv4 in entry[netifaces.AF_INET]:
                if "addr" in ipv4:
                    ips.append(ipv4["addr"])
    return addr in ips or addr == get_self_hostname()
check if this host is this addr
def addNoise(input, noise=0.1, doForeground=True, doBackground=True): if doForeground and doBackground: return numpy.abs(input - (numpy.random.random(input.shape) < noise)) else: if doForeground: return numpy.logical_and(input, numpy.random.random(input.shape) > noise) if doBackground: retu...
Add noise to the given input. Parameters: ----------------------------------------------- input: the input to add noise to noise: how much noise to add doForeground: If true, turn off some of the 1 bits in the input doBackground: If true, turn on some of the 0 bits in the input
def load(directory_name, module_name):
    directory_name = os.path.expanduser(directory_name)
    if os.path.isdir(directory_name) and directory_name not in sys.path:
        sys.path.append(directory_name)
    try:
        return importlib.import_module(module_name)
    except ImportError:
        pass
Try to load and return a module. Will add DIRECTORY_NAME to sys.path and try to import MODULE_NAME. For example: load("~/.yaz", "yaz_extension")
def merge_ordered(ordereds: typing.Iterable[typing.Any]) -> typing.Iterable[typing.Any]: seen_set = set() add_seen = seen_set.add return reversed(tuple(map( lambda obj: add_seen(obj) or obj, filterfalse( seen_set.__contains__, chain.from_iterable(map(reversed, reverse...
Merge multiple ordered iterables so that within-iterable order is preserved.
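Assuming the truncated tail of the itertools pipeline above reverses the input sequence, an equivalent plain-loop sketch (keeping the last occurrence of each element) is:

```python
def merge_ordered_simple(iterables):
    # Scan the concatenation from the end, keeping the first element
    # seen (i.e. the last occurrence overall), then restore order.
    seen = set()
    out = []
    for obj in reversed([x for it in iterables for x in it]):
        if obj not in seen:
            seen.add(obj)
            out.append(obj)
    return out[::-1]
```

merge_ordered_simple([[1, 2, 3], [2, 4]]) returns [1, 3, 2, 4]: the duplicated 2 keeps its later position.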
def ConsumeIdentifier(self):
    result = self.token
    if not self._IDENTIFIER.match(result):
        raise self._ParseError('Expected identifier.')
    self.NextToken()
    return result
Consumes protocol message field identifier. Returns: Identifier string. Raises: ParseError: If an identifier couldn't be consumed.
def convert_elementwise_sub( params, w_name, scope_name, inputs, layers, weights, names ): print('Converting elementwise_sub ...') model0 = layers[inputs[0]] model1 = layers[inputs[1]] if names == 'short': tf_name = 'S' + random_string(7) elif names == 'keep': tf_name = w_name ...
Convert elementwise subtraction. Args: params: dictionary with layer parameters w_name: name prefix in state_dict scope_name: pytorch scope name inputs: pytorch node inputs layers: dictionary with keras tensors weights: pytorch state_dict names: use short nam...
def flare_model(flareparams, times, mags, errs): (amplitude, flare_peak_time, rise_gaussian_stdev, decay_time_constant) = flareparams zerolevel = np.median(mags) modelmags = np.full_like(times, zerolevel) modelmags[times < flare_peak_time] = ( mags[times < flare_peak_time] + amplitu...
This is a flare model function, similar to Kowalski+ 2011. From the paper by Pitkin+ 2014: http://adsabs.harvard.edu/abs/2014MNRAS.445.2268P Parameters ---------- flareparams : list of float This defines the flare model:: [amplitude, flare_peak_time, ...
def _coeff4(N, a0, a1, a2, a3):
    if N == 1:
        return ones(1)
    n = arange(0, N)
    N1 = N - 1.
    w = a0 - a1*cos(2.*pi*n / N1) + a2*cos(4.*pi*n / N1) - a3*cos(6.*pi*n / N1)
    return w
A common internal function for window functions with 4 coefficients. For the Blackman-Harris window, for instance, the results are identical to Octave if N is odd but not for even values... Whatever N is, at n = 0 the value w(0) must equal a0-a1+a2-a3, which is the case here, but not in Octave...
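A vectorized sketch of the same 4-coefficient window, using the standard 4-term Blackman-Harris coefficients as an illustration (the coefficient values are an assumption, not taken from this codebase):

```python
import numpy as np

def coeff4(N, a0, a1, a2, a3):
    # Generalized 4-term cosine window; at n = 0 every cosine is 1,
    # so w[0] == a0 - a1 + a2 - a3 for any N.
    if N == 1:
        return np.ones(1)
    n = np.arange(N)
    N1 = N - 1.0
    return (a0
            - a1 * np.cos(2 * np.pi * n / N1)
            + a2 * np.cos(4 * np.pi * n / N1)
            - a3 * np.cos(6 * np.pi * n / N1))

# Standard 4-term Blackman-Harris coefficients
w = coeff4(64, 0.35875, 0.48829, 0.14128, 0.01168)
```

Because the window is built from cosines over n/(N-1), it is symmetric: the first and last samples are equal.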
def update(self): if self.delay > 0: self.delay -= 1; return if self.fi == 0: if len(self.q) == 1: self.fn = float("inf") else: self.fn = len(self.q[self.i]) / self.speed self.fn = max(self.fn, self.mf) ...
Rotates the queued texts and determines display time.
def _get_zoom(zoom, input_raster, pyramid_type): if not zoom: minzoom = 1 maxzoom = get_best_zoom_level(input_raster, pyramid_type) elif len(zoom) == 1: minzoom = zoom[0] maxzoom = zoom[0] elif len(zoom) == 2: if zoom[0] < zoom[1]: minzoom = zoom[0] ...
Determine minimum and maximum zoomlevel.
def calculate(self, T, method):
    if method == CRC_INORG_S:
        Vms = self.CRC_INORG_S_Vm
    elif method in self.tabular_data:
        Vms = self.interpolate(T, method)
    return Vms
r'''Method to calculate the molar volume of a solid at temperature `T` with a given method. This method has no exception handling; see `T_dependent_property` for that. Parameters ---------- T : float Temperature at which to calculate molar volume, [K] ...
def get_command(self, ctx, name): if name in misc.__all__: return getattr(misc, name) try: resource = tower_cli.get_resource(name) return ResSubcommand(resource) except ImportError: pass secho('No such command: %s.' % name, fg='red', bold=T...
Given a command identified by its name, import the appropriate module and return the decorated command. Resources are automatically commands, but if both a resource and a command are defined, the command takes precedence.
def extract_execution_state(self, topology): execution_state = topology.execution_state executionState = { "cluster": execution_state.cluster, "environ": execution_state.environ, "role": execution_state.role, "jobname": topology.name, "submission_time": execution_state.su...
Returns the representation of execution state that will be returned from Tracker.
def run(self, x, y, lr=0.01, train_epochs=1000, test_epochs=1000, idx=0, verbose=None, **kwargs): verbose = SETTINGS.get_default(verbose=verbose) optim = th.optim.Adam(self.parameters(), lr=lr) running_loss = 0 teloss = 0 for i in range(train_epochs + test_epochs): op...
Run the GNN on a pair x,y of FloatTensor data.
def _byteify(data, ignore_dicts=False): if isinstance(data, unicode): return data.encode("utf-8") if isinstance(data, list): return [_byteify(item, ignore_dicts=True) for item in data] if isinstance(data, dict) and not ignore_dicts: return { _byteify(key, ignore_dicts=Tru...
converts unicode to utf-8 when reading in json files
def contains_sequence(self, *items): if len(items) == 0: raise ValueError('one or more args must be given') else: try: for i in xrange(len(self.val) - len(items) + 1): for j in xrange(len(items)): if self.val[i+j] != ite...
Asserts that val contains the given sequence of items in order.
def is_dicom_file(filepath): if not os.path.exists(filepath): raise IOError('File {} not found.'.format(filepath)) filename = os.path.basename(filepath) if filename == 'DICOMDIR': return False try: _ = dicom.read_file(filepath) except Exception as exc: log.debug('Chec...
Tries to read the file using dicom.read_file; if the file exists and dicom.read_file does not raise an Exception, returns True. False otherwise. :param filepath: str Path to DICOM file :return: bool
def CALLDATALOAD(self, offset): if issymbolic(offset): if solver.can_be_true(self._constraints, offset == self._used_calldata_size): self.constraints.add(offset == self._used_calldata_size) raise ConcretizeArgument(1, policy='SAMPLED') self._use_calldata(offset, 3...
Get input data of current environment
def _assure_dir(self):
    try:
        os.makedirs(self._state_dir)
    except OSError as err:
        if err.errno != errno.EEXIST:
            raise
Make sure the state directory exists
def _addRoute(self, f, matcher):
    self._routes.append((f.func_name, f, matcher))
Add a route handler and matcher to the collection of possible routes.
def plot_drawdown_periods(returns, top=10, ax=None, **kwargs): if ax is None: ax = plt.gca() y_axis_formatter = FuncFormatter(utils.two_dec_places) ax.yaxis.set_major_formatter(FuncFormatter(y_axis_formatter)) df_cum_rets = ep.cum_returns(returns, starting_value=1.0) df_drawdowns = timeserie...
Plots cumulative returns highlighting top drawdown periods. Parameters ---------- returns : pd.Series Daily returns of the strategy, noncumulative. - See full explanation in tears.create_full_tear_sheet. top : int, optional Amount of top drawdowns periods to plot (default 10). ...
def calculate(self, T, P, zs, ws, method): r if method == SIMPLE: sigmas = [i(T) for i in self.SurfaceTensions] return mixing_simple(zs, sigmas) elif method == DIGUILIOTEJA: return Diguilio_Teja(T=T, xs=zs, sigmas_Tb=self.sigmas_Tb, ...
r'''Method to calculate surface tension of a liquid mixture at temperature `T`, pressure `P`, mole fractions `zs` and weight fractions `ws` with a given method. This method has no exception handling; see `mixture_property` for that. Parameters ---------- T : fl...
def mmGetMetricSequencesPredictedActiveCellsShared(self): self._mmComputeTransitionTraces() numSequencesForCell = defaultdict(lambda: 0) for predictedActiveCells in ( self._mmData["predictedActiveCellsForSequence"].values()): for cell in predictedActiveCells: numSequencesForCell[cell...
Metric for number of sequences each predicted => active cell appears in Note: This metric is flawed when it comes to high-order sequences. @return (Metric) metric
def template_apiserver_hcl(cl_args, masters, zookeepers): single_master = masters[0] apiserver_config_template = "%s/standalone/templates/apiserver.template.hcl" \ % cl_args["config_path"] apiserver_config_actual = "%s/standalone/resources/apiserver.hcl" % cl_args["config_path"] re...
template apiserver.hcl
def multiselect(self, window_name, object_name, row_text_list, partial_match=False): object_handle = self._get_object_handle(window_name, object_name) if not object_handle.AXEnabled: raise LdtpServerException(u"Object %s state disabled" % object_name) object_handle.activate() ...
Select multiple row @param window_name: Window name to type in, either full name, LDTP's name convention, or a Unix glob. @type window_name: string @param object_name: Object name to type in, either full name, LDTP's name convention, or a Unix glob. @type object_name: s...
def HHV(self, HHV):
    self._HHV = HHV
    if self.isCoal:
        self._DH298 = self._calculate_DH298_coal()
Set the higher heating value of the stream to the specified value, and recalculate the formation enthalpy of the daf coal. :param HHV: MJ/kg coal, higher heating value
def get_variables_with_name(name=None, train_only=True, verbose=False): if name is None: raise Exception("please input a name") logging.info(" [*] geting variables with %s" % name) if train_only: t_vars = tf.trainable_variables() else: t_vars = tf.global_variables() d_vars =...
Get a list of TensorFlow variables by a given name scope. Parameters ---------- name : str Get the variables that contain this name. train_only : boolean If True, only get the trainable variables. verbose : boolean If True, print the information of all variables. Return...
def SLOAD(self, offset):
    storage_address = self.address
    self._publish('will_evm_read_storage', storage_address, offset)
    value = self.world.get_storage_data(storage_address, offset)
    self._publish('did_evm_read_storage', storage_address, offset, value)
    return value
Load word from storage
def add_item(self, item, replace = False): if item.jid in self._jids: if replace: self.remove_item(item.jid) else: raise ValueError("JID already in the roster") index = len(self._items) self._items.append(item) self._jids[item.jid] ...
Add an item to the roster. This will not automatically update the roster on the server. :Parameters: - `item`: the item to add - `replace`: if `True` then existing item will be replaced, otherwise a `ValueError` will be raised on conflict :Types: ...
def setup_editor(self, editor): editor.cursorPositionChanged.connect(self.on_cursor_pos_changed) try: m = editor.modes.get(modes.GoToAssignmentsMode) except KeyError: pass else: assert isinstance(m, modes.GoToAssignmentsMode) m.out_of_doc.c...
Setup the python editor, run the server and connect a few signals. :param editor: editor to setup.
def init(self):
    self.es.indices.create(index=self.params['index'], ignore=400)
Create an Elasticsearch index if necessary
def generate_thumbnail(source, outname, box, fit=True, options=None, thumb_fit_centering=(0.5, 0.5)): logger = logging.getLogger(__name__) img = _read_image(source) original_format = img.format if fit: img = ImageOps.fit(img, box, PILImage.ANTIALIAS, ...
Create a thumbnail image.
def _cacheSequenceInfoType(self): hasReset = self.resetFieldName is not None hasSequenceId = self.sequenceIdFieldName is not None if hasReset and not hasSequenceId: self._sequenceInfoType = self.SEQUENCEINFO_RESET_ONLY self._prevSequenceId = 0 elif not hasReset and hasSequenceId: self....
Figure out whether reset, sequenceId, both or neither are present in the data. Compute once instead of every time. Taken from filesource.py
def drawCircle(self, x0, y0, r, color=None):
    md.draw_circle(self.set, x0, y0, r, color)
Draw a circle in an RGB color, with center x0, y0 and radius r.
def mmap(self, addr, size, perms, data_init=None, name=None): assert addr is None or isinstance(addr, int), 'Address shall be concrete' self.cpu._publish('will_map_memory', addr, size, perms, None, None) if addr is not None: assert addr < self.memory_size, 'Address too big' ...
Creates a new mapping in the memory address space. :param addr: the starting address (took as hint). If C{addr} is C{0} the first big enough chunk of memory will be selected as starting address. :param size: the length of the mapping. :param perms: the access permissions to...
def task_ids(self): if not self.id: raise WorkflowError('Workflow is not running. Cannot get task IDs.') if self.batch_values: raise NotImplementedError("Query Each Workflow Id within the Batch Workflow for task IDs.") wf = self.workflow.get(self.id) return [task...
Get the task IDs of a running workflow Args: None Returns: List of task IDs
def slice_clip(filename, start, stop, n_samples, sr, mono=True): with psf.SoundFile(str(filename), mode='r') as soundf: n_target = stop - start soundf.seek(start) y = soundf.read(n_target).T if mono: y = librosa.to_mono(y) y = librosa.resample(y, soundf.samplerate...
Slice a fragment of audio from a file. This uses pysoundfile to efficiently seek without loading the entire stream. Parameters ---------- filename : str Path to the input file start : int The sample index of `filename` at which the audio fragment should start stop : int ...
def runModelGivenBaseAndParams(modelID, jobID, baseDescription, params, predictedField, reportKeys, optimizeKey, jobsDAO, modelCheckpointGUID, logLevel=None, predictionCacheMaxRecords=None): from nupic.swarming.ModelRunner import OPFModelRunner logger = logging.getLogger('com.numenta.nupic.h...
This creates an experiment directory with a base.py description file created from 'baseDescription' and a description.py generated from the given params dict and then runs the experiment. Parameters: ------------------------------------------------------------------------- modelID: ID for this m...
def update_desc_rcin_path(desc,sibs_len,pdesc_level): psibs_len = pdesc_level.__len__() parent_breadth = desc['parent_breadth_path'][-1] if(desc['sib_seq']==(sibs_len - 1)): if(parent_breadth==(psibs_len -1)): pass else: parent_rsib_breadth = parent_breadth + 1 ...
rightCousin / nextCousin / rightCin / nextCin / rcin / ncin: the parents are neighbors, and on the right.
def Integer(name, base=10, encoding=None):
    def _match(request, value):
        return name, query.Integer(
            value,
            base=base,
            encoding=contentEncoding(request.requestHeaders, encoding))
    return _match
Match an integer route parameter. :type name: `bytes` :param name: Route parameter name. :type base: `int` :param base: Base to interpret the value in. :type encoding: `bytes` :param encoding: Default encoding to assume if the ``Content-Type`` header is lacking one. :return: `...
def isMine(self, scriptname):
    suffix = os.path.splitext(scriptname)[1].lower()
    if suffix.startswith('.'):
        suffix = suffix[1:]
    return self.suffix == suffix
Primitive queuing system detection; only looks at suffix at the moment.
def _find_zero(cpu, constrs, ptr):
    offset = 0
    while True:
        byt = cpu.read_int(ptr + offset, 8)
        if issymbolic(byt):
            if not solver.can_be_true(constrs, byt != 0):
                break
        else:
            if byt == 0:
                break
        offset += 1
    return offset
Helper for finding the closest NULL or, effectively NULL byte from a starting address. :param Cpu cpu: :param ConstraintSet constrs: Constraints for current `State` :param int ptr: Address to start searching for a zero from :return: Offset from `ptr` to first byte that is 0 or an `Expression` that must...
def camel2word(string):
    def wordize(match):
        return ' ' + match.group(1).lower()
    return string[0] + re.sub(r'([A-Z])', wordize, string[1:])
Convert name from CamelCase to "Normal case". >>> camel2word('CamelCase') 'Camel case' >>> camel2word('CaseWithSpec') 'Case with spec'
def python_value(self, value):
    value = super(OrderedUUIDField, self).python_value(value)
    u = binascii.b2a_hex(value)
    value = u[8:16] + u[4:8] + u[0:4] + u[16:22] + u[22:32]
    return UUID(value.decode())
Convert binary blob to UUID instance
def aggregate(self, clazz, new_col, *args):
    if is_callable(clazz) and not is_none(new_col) and has_elements(*args):
        return self.__do_aggregate(clazz, new_col, *args)
Aggregate the rows of the DataFrame into a single value. :param clazz: name of a class that extends class Callable :type clazz: class :param new_col: name of the new column :type new_col: str :param args: list of column names of the object that function should be applie...
def row(self, idx):
    return DataFrameRow(idx, [x[idx] for x in self], self.colnames)
Returns DataFrameRow of the DataFrame given its index. :param idx: the index of the row in the DataFrame. :return: returns a DataFrameRow
def set_permissions(self):
    r = self.local_renderer
    for path in r.env.paths_owned:
        r.env.path_owned = path
        r.sudo('chown {celery_daemon_user}:{celery_daemon_user} {celery_path_owned}')
Sets ownership and permissions for Celery-related files.
def bresenham_line(setter, x0, y0, x1, y1, color=None, colorFunc=None): steep = abs(y1 - y0) > abs(x1 - x0) if steep: x0, y0 = y0, x0 x1, y1 = y1, x1 if x0 > x1: x0, x1 = x1, x0 y0, y1 = y1, y0 dx = x1 - x0 dy = abs(y1 - y0) err = dx / 2 if y0 < y1: ys...
Draw a line from point x0,y0 to x1,y1. Will draw beyond matrix bounds.
def sim_minkowski(src, tar, qval=2, pval=1, alphabet=None):
    return Minkowski().sim(src, tar, qval, pval, alphabet)
Return normalized Minkowski similarity of two strings. This is a wrapper for :py:meth:`Minkowski.sim`. Parameters ---------- src : str Source string (or QGrams/Counter objects) for comparison tar : str Target string (or QGrams/Counter objects) for comparison qval : int ...
def dump(self, itemkey, filename=None, path=None): if not filename: filename = self.item(itemkey)["data"]["filename"] if path: pth = os.path.join(path, filename) else: pth = filename file = self.file(itemkey) if self.snapshot: self....
Dump a file attachment to disk, with optional filename and path
def split_storage(path, default='osfstorage'): path = norm_remote_path(path) for provider in KNOWN_PROVIDERS: if path.startswith(provider + '/'): if six.PY3: return path.split('/', maxsplit=1) else: return path.split('/', 1) return (default, pa...
Extract storage name from file path. If a path begins with a known storage provider the name is removed from the path. Otherwise the `default` storage provider is returned and the path is not modified.
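A self-contained sketch of that split, with an illustrative provider list and a simple `lstrip` standing in for `norm_remote_path` (both are assumptions):

```python
KNOWN_PROVIDERS = ['osfstorage', 'github', 'figshare']  # illustrative only

def split_storage_simple(path, default='osfstorage'):
    # Strip leading slashes, then peel off a known provider prefix;
    # otherwise fall back to the default provider with the path untouched.
    path = path.lstrip('/')
    for provider in KNOWN_PROVIDERS:
        if path.startswith(provider + '/'):
            return path.split('/', 1)
    return (default, path)
```

`split_storage_simple('github/docs/readme.md')` yields ['github', 'docs/readme.md'], while a bare 'docs/readme.md' falls back to the osfstorage default.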
def setup_a_alpha_and_derivatives(self, i, T=None):
    self.a, self.Tc, self.S1, self.S2 = self.ais[i], self.Tcs[i], self.S1s[i], self.S2s[i]
r'''Sets `a`, `S1`, `S2` and `Tc` for a specific component before the pure-species EOS's `a_alpha_and_derivatives` method is called. Both are called by `GCEOSMIX.a_alpha_and_derivatives` for every component.
def application_exists(self):
    response = self.ebs.describe_applications(application_names=[self.app_name])
    result = response['DescribeApplicationsResponse']['DescribeApplicationsResult']
    return len(result['Applications']) > 0
Returns whether or not the given app_name exists
def discard(self, pid=None): pid = pid or self.pid with db.session.begin_nested(): before_record_update.send( current_app._get_current_object(), record=self) _, record = self.fetch_published() self.model.json = deepcopy(record.model.json) s...
Discard deposit changes. #. The signal :data:`invenio_records.signals.before_record_update` is sent before the edit execution. #. It restores the last published version. #. The following meta information are saved inside the deposit: .. code-block:: python de...
def is_img(obj):
    try:
        get_data = getattr(obj, 'get_data')
        get_affine = getattr(obj, 'get_affine')
        return isinstance(get_data, collections.Callable) and \
            isinstance(get_affine, collections.Callable)
    except AttributeError:
        return False
Check for get_data and get_affine method in an object Parameters ---------- obj: any object Tested object Returns ------- is_img: boolean True if get_data and get_affine methods are present and callable, False otherwise.
def create_helpingmaterial(project_id, info, media_url=None, file_path=None): try: helping = dict( project_id=project_id, info=info, media_url=None, ) if file_path: files = {'file': open(file_path, 'rb')} payload = {'project_id': pr...
Create a helping material for a given project ID. :param project_id: PYBOSSA Project ID :type project_id: integer :param info: PYBOSSA Helping Material info JSON field :type info: dict :param media_url: URL for a media file (image, video or audio) :type media_url: string :param file_path: F...
def _socket_readlines(self, blocking=False): try: self.sock.setblocking(0) except socket.error as e: self.logger.error("socket error when setblocking(0): %s" % str(e)) raise ConnectionDrop("connection dropped") while True: short_buf = b'' ...
Generator for complete lines, received from the server
def position_rates(self):
    return [self.ode_obj.getPositionRate(i) for i in range(self.LDOF)]
List of position rates for linear degrees of freedom.
def _derive_checksum(self, s):
    checksum = hashlib.sha256(bytes(s, "ascii")).hexdigest()
    return checksum[:4]
Derive the checksum :param str s: Random string for which to derive the checksum
def get(ctx): user, project_name, _group = get_project_group_or_local(ctx.obj.get('project'), ctx.obj.get('group')) try: response = PolyaxonClient().experiment_group.get_experiment_group( user, project_name, _group) cache.ca...
Get experiment group by uuid. Uses [Caching](/references/polyaxon-cli/#caching) Examples: \b ```bash $ polyaxon group -g 13 get ```
def load_file_list(path=None, regx='\.jpg', printable=True, keep_prefix=False): r if path is None: path = os.getcwd() file_list = os.listdir(path) return_list = [] for _, f in enumerate(file_list): if re.search(regx, f): return_list.append(f) if keep_prefix: f...
r"""Return a file list in a folder by given a path and regular expression. Parameters ---------- path : str or None A folder path, if `None`, use the current directory. regx : str The regx of file name. printable : boolean Whether to print the files information. keep_pref...
def wait_until_final(self, poll_interval=1, timeout=60):
    start_time = time.time()
    elapsed = 0
    while (self.status != "complete"
           and (timeout <= 0 or elapsed < timeout)):
        time.sleep(poll_interval)
        self.refresh()
        elapsed = time.time() - start_time
It will poll the URL to grab the latest status resource in a given timeout and time interval. Args: poll_interval (int): how often to poll the status service. timeout (int): how long to poll the URL until giving up. Use <= 0 to wait forever
def TP_dependent_property(self, T, P): r if self.method_P: if self.test_method_validity_P(T, P, self.method_P): try: prop = self.calculate_P(T, P, self.method_P) if self.test_property_validity(prop): return prop ...
r'''Method to calculate the property with sanity checking and without specifying a specific method. `select_valid_methods_P` is used to obtain a sorted list of methods to try. Methods are then tried in order until one succeeds. The methods are allowed to fail, and their results are check...
def save(self,callit="misc",closeToo=True,fullpath=False): if fullpath is False: fname=self.abf.outPre+"plot_"+callit+".jpg" else: fname=callit if not os.path.exists(os.path.dirname(fname)): os.mkdir(os.path.dirname(fname)) plt.savefig(fname) s...
Save the existing figure. Does not close it.
def get_last_commit_line(git_path=None):
    if git_path is None:
        git_path = GIT_PATH
    output = check_output([git_path, "log", "--pretty=format:'%ad %h %s'",
                           "--date=short", "-n1"])
    return output.strip()[1:-1]
Get one-line description of HEAD commit for repository in current dir.
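The trailing `strip()[1:-1]` removes the single quotes that the `--pretty=format:'%ad %h %s'` argument wraps around the output. A tiny demonstration with a hard-coded stand-in for the `check_output` result (no real `git` invocation; the commit line is made up):

```python
# Hypothetical raw output from: git log --pretty=format:'%ad %h %s' --date=short -n1
raw = b"'2021-03-04 ab12cd3 Fix typo'\n"

# strip() drops the trailing newline; [1:-1] drops the wrapping single quotes
line = raw.strip()[1:-1]
print(line)  # b'2021-03-04 ab12cd3 Fix typo'
```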
def might_need_auth(f): @wraps(f) def wrapper(cli_args): try: return_value = f(cli_args) except UnauthorizedException as e: config = config_from_env(config_from_file()) username = _get_username(cli_args, config) if username is None: ...
Decorate a CLI function that might require authentication. Catches any UnauthorizedException raised, prints a helpful message and then exits.
def validateRequest(self, uri, postVars, expectedSignature): s = uri for k, v in sorted(postVars.items()): s += k + v return (base64.encodestring(hmac.new(self.auth_token, s, sha1).digest()).strip() == expectedSignature)
validate a request from plivo uri: the full URI that Plivo requested on your server postVars: post vars that Plivo sent with the request expectedSignature: signature in HTTP X-Plivo-Signature header returns true if the request passes validation, false if not
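The scheme concatenates the URI with the POST parameters sorted by key, HMAC-SHA1s the result with the auth token, and Base64-encodes the digest. A Python 3 sketch of just the signature computation (the token, URI, and parameters are made up; `compute_signature` is not part of the original class):

```python
import base64
import hmac
from hashlib import sha1

def compute_signature(auth_token: bytes, uri: str, post_vars: dict) -> bytes:
    # Concatenate the URI with key/value pairs sorted by key
    s = uri
    for k, v in sorted(post_vars.items()):
        s += k + v
    # HMAC-SHA1 with the auth token, then Base64-encode the raw digest
    digest = hmac.new(auth_token, s.encode('utf-8'), sha1).digest()
    return base64.b64encode(digest)

sig = compute_signature(b'secret', 'https://example.com/answer/',
                        {'From': '123', 'To': '456'})
```

When comparing against the header value, `hmac.compare_digest` is preferable to `==` to avoid leaking timing information.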
def Tok(kind, loc=None): @llrule(loc, lambda parser: [kind]) def rule(parser): return parser._accept(kind) return rule
A rule that accepts a token of kind ``kind`` and returns it, or returns None.
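`Tok` closes over `parser._accept`, so the rule only works against a parser exposing that method. A toy parser illustrating the `_accept` contract the rule relies on (the class name and token shape here are hypothetical, not from the original library):

```python
class TinyParser:
    """Minimal stand-in for the parser object a Tok-style rule receives."""

    def __init__(self, tokens):
        self.tokens = list(tokens)  # e.g. [('int', '42'), ('plus', '+')]
        self.pos = 0

    def _accept(self, kind):
        # Consume and return the next token if its kind matches, else None.
        if self.pos < len(self.tokens) and self.tokens[self.pos][0] == kind:
            tok = self.tokens[self.pos]
            self.pos += 1
            return tok
        return None
```

On a mismatch `_accept` returns None without consuming input, which is what lets combinators try alternatives.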
def _append_element(self, render_func, pe): self._render_funcs.append(render_func) self._elements.append(pe)
Append a render function and the parameters to pass an equivalent PathElement, or the PathElement itself.
async def handle_message(self, message, filters): data = self._unpack_message(message) logger.debug(data) if data.get('type') == 'error': raise SlackApiError( data.get('error', {}).get('msg', str(data)) ) elif self.message_is_to_me(data): ...
Handle an incoming message appropriately. Arguments: message (:py:class:`aiohttp.websocket.Message`): The incoming message to handle. filters (:py:class:`list`): The filters to apply to incoming messages.
def _parse_field_value(line): if line.startswith(':'): return None, None if ':' not in line: return line, '' field, value = line.split(':', 1) value = value[1:] if value.startswith(' ') else value return field, value
Parse the field and value from a line.
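Since the function above is shown in full, its behavior is easy to demonstrate. It follows the Server-Sent Events convention: a line starting with `:` is a comment, a line with no colon is a field with an empty value, and exactly one space after the colon is stripped:

```python
def parse_field_value(line):
    # Lines starting with ':' are comments and yield nothing
    if line.startswith(':'):
        return None, None
    # A line with no colon is a field name with an empty value
    if ':' not in line:
        return line, ''
    field, value = line.split(':', 1)
    # Strip exactly one leading space after the colon
    value = value[1:] if value.startswith(' ') else value
    return field, value
```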
def format(obj, options): formatters = { float_types: lambda x: '{:.{}g}'.format(x, options.digits), } for _types, fmtr in formatters.items(): if isinstance(obj, _types): return fmtr(obj) try: if six.PY2 and isinstance(obj, six.string_types): return str(ob...
Return a string representation of the Python object Args: obj: The Python object options: Format options
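The float branch uses the nested replacement field `'{:.{}g}'` to substitute a precision into the `g` format. A tiny self-contained demo; `digits` mirrors the `options.digits` attribute assumed above:

```python
def format_float(x, digits):
    # '{:.{}g}' injects `digits` into the precision field of the g format,
    # limiting the output to that many significant digits
    return '{:.{}g}'.format(x, digits)

print(format_float(3.14159, 3))    # '3.14'
print(format_float(1234567.0, 2))  # '1.2e+06'
```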
def _ref(self, param, base_name=None): name = base_name or param.get('title', '') or param.get('name', '') pointer = self.json_pointer + name self.parameter_registry[name] = param return {'$ref': pointer}
Store a parameter schema and return a reference to it. :param param: Swagger parameter definition. :param base_name: Name that should be used for the reference. :rtype: dict :returns: JSON pointer to the original parameter definition.
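The register-and-reference pattern can be sketched without the class; the function and variable names below are illustrative, not from the original code:

```python
def make_ref(registry, json_pointer, param, base_name=None):
    # Fall back from an explicit name to the schema's title, then its name
    name = base_name or param.get('title', '') or param.get('name', '')
    # Keep the full definition so it can be serialized once, centrally
    registry[name] = param
    return {'$ref': json_pointer + name}

registry = {}
ref = make_ref(registry, '#/parameters/', {'name': 'limit', 'type': 'integer'})
print(ref)  # {'$ref': '#/parameters/limit'}
```

Callers embed the small `$ref` dict wherever the parameter is used, while the registry holds the single authoritative definition.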
def store_drop(cls, resource: str, session: Optional[Session] = None) -> 'Action': action = cls.make_drop(resource) _store_helper(action, session=session) return action
Store a "drop" event. :param resource: The normalized name of the resource to store Example: >>> from bio2bel.models import Action >>> Action.store_drop('hgnc')
def create_task(self): return self.spec_class(self.spec, self.get_task_spec_name(), lane=self.get_lane(), description=self.node.get('name', None))
Create an instance of the task spec for this node. A subclass can override this method to get extra information from the node.
def str_to_rgb(self, str): str = str.lower() for ch in "_- ": str = str.replace(ch, "") if str in named_colors: return named_colors[str] for suffix in ["ish", "ed", "y", "like"]: str = re.sub("(.*?)" + suffix + "$", "\\1", str) str = re.s...
Returns RGB values based on a descriptive string. If the given str is a named color, return its RGB values. Otherwise, return a random named color that has str in its name, or a random named color which name appears in str. Specific suffixes (-ish, -ed, -y and -like) are recognised ...
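The lookup first normalizes the descriptive string: separators are dropped and the recognised suffixes are stripped. A stand-alone sketch of just that normalization step (the helper name is made up):

```python
import re

def normalize_color_name(name):
    name = name.lower()
    # Drop separators so "dark_red", "dark-red" and "dark red" compare equal
    for ch in "_- ":
        name = name.replace(ch, "")
    # Strip recognised descriptive suffixes: -ish, -ed, -y, -like
    for suffix in ["ish", "ed", "y", "like"]:
        name = re.sub("(.*?)" + suffix + "$", r"\1", name)
    return name

print(normalize_color_name("Greenish"))  # 'green'
```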
def install_setuptools(python_cmd='python', use_sudo=True): setuptools_version = package_version('setuptools', python_cmd) distribute_version = package_version('distribute', python_cmd) if setuptools_version is None: _install_from_scratch(python_cmd, use_sudo) else: if distribute_version...
Install the latest version of `setuptools`_. :: import burlap burlap.python_setuptools.install_setuptools()
def readheaders(self): self.dict = {} self.unixfrom = '' self.headers = lst = [] self.status = '' headerseen = "" firstline = 1 startofline = unread = tell = None if hasattr(self.fp, 'unread'): unread = self.fp.unread elif self.seekable...
Read header lines. Read header lines up to the entirely blank line that terminates them. The (normally blank) line that ends the headers is skipped, but not included in the returned list. If a non-header line ends the headers, (which is an error), an attempt is made to backspace over i...
def remove_highlight_nodes(graph: BELGraph, nodes: Optional[Iterable[BaseEntity]]=None) -> None: for node in graph if nodes is None else nodes: if is_node_highlighted(graph, node): del graph.node[node][NODE_HIGHLIGHT]
Remove the highlight from the given nodes, or from all nodes if none are given. :param graph: A BEL graph :param nodes: The list of nodes to un-highlight
def do_vars(self, line): if self.bot._vars: max_name_len = max([len(name) for name in self.bot._vars]) for i, (name, v) in enumerate(self.bot._vars.items()): keep = i < len(self.bot._vars) - 1 self.print_response("%s = %s" % (name.ljust(max_name_len), v.va...
List bot variables and values