Analyse the timing in the lightcurves

The routines here are for non-periodic timing; see YSOVAR.lombscargle for periodograms.

YSOVAR.lightcurves.ARmodel(t, val, degree=2, scale=0.5)

Fit an auto-regressive (AR) model to data and return some parameters

The input data can be irregularly sampled; it will be resampled on a regular grid with bin width scale.

Parameters:

t : np.ndarray

input times

val : np.ndarray

input values

degree : int

degree of AR model

scale : float

binning of the resampled lightcurve

Returns:

params : list of (degree + 1) floats

parameters of the model

sigma2 : float

sigma of the Gaussian component of the model

aic : float

value of the Akaike information criterion
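
A minimal usage sketch; the synthetic lightcurve is purely illustrative, and only the call signature and the three return values are taken from the documentation above:

>>> import numpy as np
>>> from YSOVAR import lightcurves
>>> t = np.sort(np.random.uniform(0., 40., 200))   # irregularly sampled times
>>> val = np.sin(t / 3.) + np.random.normal(0., 0.1, 200)
>>> params, sigma2, aic = lightcurves.ARmodel(t, val, degree=2, scale=0.5)
>>> len(params)   # degree + 1 model parameters
3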

YSOVAR.lightcurves.calc_poly_chi(data, bands=['36', '45'])

Fits polynomials of degree 1..6 to all lightcurves in data

One way to assess whether a lightcurve is “smooth” is to fit a low-order polynomial. This routine fits polynomials of degree 1 to 6 to each IRAC1 and IRAC2 lightcurve and calculates the chi^2 value for each fit.

Parameters:

data : astropy.table.Table

structure with the defined object properties.

bands : list of strings

Band identifiers, e.g. ['36', '45']; can also be a list with one entry, e.g. ['36'].
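
A hypothetical call, assuming data is an astropy.table.Table in the format this module expects, with the lightcurves already attached (constructing such a table is beyond this snippet):

>>> from YSOVAR import lightcurves
>>> lightcurves.calc_poly_chi(data, bands=['36'])   # fit degree 1-6 polynomials to every IRAC1 lightcurve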

YSOVAR.lightcurves.combinations_with_replacement(iterable, r)

Defined here for backwards compatibility; from Python 2.7 on, it is included in itertools.
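
It behaves like the itertools function of the same name:

>>> from YSOVAR.lightcurves import combinations_with_replacement
>>> list(combinations_with_replacement('AB', 2))
[('A', 'A'), ('A', 'B'), ('B', 'B')]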

YSOVAR.lightcurves.corr_points(x, data1, data2)

Make all combinations of two variables at times x

Parameters:

x : np.ndarray

independent variable (x-axis), e.g. time of a lightcurve

data1, data2 : np.ndarray

dependent variables (y-axis), e.g. flux for a lightcurve

Returns:

diff_x : np.ndarray

all possible intervals of the independent variable

d_2 : 2-d np.ndarray

corresponding values of the dependent variables. The array has shape (N, 2), where N is the number of combinations.
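
A small sketch; the data is illustrative, and only the documented shape is asserted:

>>> import numpy as np
>>> from YSOVAR import lightcurves
>>> x = np.array([0., 1., 3.])
>>> flux1 = np.array([1., 2., 4.])
>>> flux2 = np.array([0., 1., 1.])
>>> diff_x, d_2 = lightcurves.corr_points(x, flux1, flux2)
>>> d_2.shape[1]   # one column per dependent variable
2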

YSOVAR.lightcurves.delta_corr_points(x, data1, data2)

Correlate two variables sampled at the same (possibly irregular) time points

Parameters:

x : np.ndarray

independent variable (x-axis), e.g. time of a lightcurve

data1, data2 : np.ndarray

dependent variables (y-axis), e.g. flux for a lightcurve

Returns:

diff_x : np.ndarray

all possible intervals of the independent variable

d_2 : np.ndarray

corresponding correlation in the dependent variables

Note: Essentially, this is a correlation function for irregularly sampled data.
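
Usage mirrors corr_points; the two fluxes below are synthetic placeholders:

>>> import numpy as np
>>> from YSOVAR import lightcurves
>>> t = np.array([0., 0.5, 1.2, 3.1])
>>> f36 = np.array([10.1, 10.3, 10.2, 10.5])
>>> f45 = np.array([9.8, 9.9, 9.9, 10.2])
>>> diff_t, corr = lightcurves.delta_corr_points(t, f36, f45)
>>> # corr[i] belongs to the time interval diff_t[i]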

YSOVAR.lightcurves.delta_delta_points(data1, data2)

Make a list of scatter delta_data1 vs. delta_data2 for all combinations of points.

For example, this can be used to calculate delta_T vs. delta_mag.

Parameters:

data1 : np.ndarray

independent variable (x-axis), e.g. time of a lightcurve

data2 : np.ndarray

dependent variable (y-axis), e.g. flux for a lightcurve

Returns:

diff_1 : np.ndarray

all possible intervals of the independent variable

diff_2 : np.ndarray

corresponding differences in the dependent variable

Note: Essentially, this is an autocorrelation for irregularly sampled data.
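
A sketch of the delta_T vs. delta_mag use case mentioned above (synthetic values):

>>> import numpy as np
>>> from YSOVAR import lightcurves
>>> time = np.array([0., 1., 3.])
>>> mag = np.array([11.0, 11.2, 10.9])
>>> dt, dmag = lightcurves.delta_delta_points(time, mag)
>>> # each (dt[i], dmag[i]) pair comes from one combination of two points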

YSOVAR.lightcurves.describe_autocorr(t, val, scale=0.1, autocorr_scale=0.5, autosum_limit=1.75)

Describe the timescales of a time series using an autocorrelation function

This procedure takes an unevenly sampled time series and computes the autocorrelation function from it. The result is binned in time bins of width scale, and three numbers are derived from the shape of the autocorrelation function.

This is based on the definitions used by Maria for the Orion paper. A visual definition is given on the YSOVAR wiki (restricted access).

Parameters:

t : np.ndarray

times of time series

val : np.ndarray

values of time series

scale : float

In order to accept irregular time series, the calculated autocorrelation needs to be binned in time. scale sets the width of those bins.

autocorr_scale : float

coherence_time is the time when the autocorrelation falls below autocorr_scale. 0.5 is a common value, but for sparse sampling 0.2 might give better results.

autosum_limit : float

The autocorrelation function is also calculated with a time binning of scale. To get a robust measure, the function calculates the timescale at which the cumulative sum of the autocorrelation function exceeds autosum_limit.

Returns:

cumsumtime : float

time when the cumulative sum of a finely binned autocorrelation function exceeds autosum_limit for the first time; np.inf is returned if the autocorrelation function never reaches this value.

coherence_time : float

time when the autocorrelation function falls below autocorr_scale

autocorr_time : float

position of first positive peak

autocorr_val : float

value of first positive peak
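
A minimal sketch with a synthetic periodic lightcurve; the four return values are exactly those documented above:

>>> import numpy as np
>>> from YSOVAR import lightcurves
>>> t = np.sort(np.random.uniform(0., 40., 300))
>>> val = np.sin(2. * np.pi * t / 5.) + np.random.normal(0., 0.2, 300)
>>> cumsumtime, coherence_time, autocorr_time, autocorr_val = \
...     lightcurves.describe_autocorr(t, val, scale=0.1, autocorr_scale=0.5)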

YSOVAR.lightcurves.discrete_struc_func(t, val, order=2, scale=0.1)

Calculate the discrete structure function

Parameters:

t : np.ndarray

times of time series

val : np.ndarray

values of time series

order : float

the exponent of the structure function

scale : float

In order to accept irregular time series, the calculated structure function needs to be binned in time. scale sets the width of those bins.

Returns:

timebins : np.ndarray

time bins corresponding to the values in dsf

dsf : np.ndarray

binned and averaged discrete structure function
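
A short sketch (synthetic data; timebins and dsf are the documented return values):

>>> import numpy as np
>>> from YSOVAR import lightcurves
>>> t = np.sort(np.random.uniform(0., 40., 300))
>>> val = np.random.normal(0., 1., 300)
>>> timebins, dsf = lightcurves.discrete_struc_func(t, val, order=2, scale=0.1)
>>> # plotting dsf against timebins reveals characteristic timescales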

YSOVAR.lightcurves.fit_poly(x, y, yerr, degree=2)

Fit a polynomial to a dataset

Note: For numerical stability, the x values will be shifted such that x[0] = 0. Thus, the parameters describe a fit to this shifted dataset!

Parameters:

x : np.ndarray

array of the independent variable

y : np.ndarray

array of the dependent variable

yerr : np.ndarray

uncertainty of y values

degree : integer

degree of polynomial

Returns:

res_var : float

residual of the fit

shift : float

shift applied to the x values for numerical stability.

beta : list

fit parameters
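
A short sketch illustrating the shift described in the note above (synthetic data):

>>> import numpy as np
>>> from YSOVAR import lightcurves
>>> x = np.array([5., 6., 7., 8., 9.])
>>> y = 2. * x + 1. + np.random.normal(0., 0.05, 5)
>>> yerr = np.full(5, 0.05)
>>> res_var, shift, beta = lightcurves.fit_poly(x, y, yerr, degree=1)
>>> # beta parametrizes the fit to the shifted x values, not the original ones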

YSOVAR.lightcurves.gauss_kernel(scale=1)

Return a Gaussian kernel

Parameters:

scale : float

width (sigma) of the Gaussian function

Returns:

kernel : function

kernel(x, loc), where loc is the center of the Gaussian and x are the bin boundaries.
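
A sketch of evaluating the returned kernel, assuming it follows the bin-edge convention used by slotting (see below):

>>> import numpy as np
>>> from YSOVAR import lightcurves
>>> kernel = lightcurves.gauss_kernel(scale=1.)
>>> edges = np.linspace(-5., 5., 11)   # bin boundaries
>>> weights = kernel(edges, 0.)        # Gaussian centered on loc = 0
>>> # following the kernel convention of slotting, this yields one value per bin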

YSOVAR.lightcurves.normalize(data)

Normalize data to mean = 1 and stddev = 1

Parameters:

data : np.array

input data

Returns:

data : np.array

normalized set of data
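
A trivial usage sketch:

>>> import numpy as np
>>> from YSOVAR import lightcurves
>>> data = np.array([10., 12., 11., 13.])
>>> norm = lightcurves.normalize(data)   # rescaled as described above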

YSOVAR.lightcurves.plot_all_polys(x, y, yerr, title='')

Plot polynomial fits of degree 1-6 for a dataset

Parameters:

x : np.ndarray

array of the independent variable

y : np.ndarray

array of the dependent variable

yerr : np.ndarray

uncertainty of y values

title : string

title of plot

Returns:

fig : matplotlib.figure instance
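
A usage sketch (synthetic data; the output file name is hypothetical):

>>> import numpy as np
>>> from YSOVAR import lightcurves
>>> x = np.linspace(0., 10., 30)
>>> y = 0.5 * x**2 - x + np.random.normal(0., 1., 30)
>>> yerr = np.ones(30)
>>> fig = lightcurves.plot_all_polys(x, y, yerr, title='example lightcurve')
>>> fig.savefig('polyfits.png')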

YSOVAR.lightcurves.slotting(xbins, x, y, kernel=None, normalize=True)

Add up all the y values in each x bin

xbins defines a (possibly non-uniform) bin grid. For each bin, find all (x, y) pairs that belong in the x bin and add up all the y values in that bin. Optionally, the x values can be convolved with a kernel beforehand, so that each y can contribute to more than one bin.

Parameters:

xbins : np.ndarray

edges of the x bins. There are len(xbins)-1 bins.

x, y : np.ndarray

x and y values to be binned

kernel : function

The kernel input is bin edges and the kernel output is bin values; thus, len(kernelout) must be len(kernelin)-1. The kernel output should be normalized to 1.

normalize : bool

If False, this gives the usual correlation function. For a regularly sampled time series, this is the same as zero-padding on the edges. For normalize = True, divide by the number of entries in each time bin. This avoids zero-padding, but leads to an irregular “noise” distribution over the bins.

Returns:

out : np.ndarray

resulting array of added y values

n : np.ndarray

number of entries in each bin. If a kernel is used, this can be non-integer.
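
A sketch of plain slotting without a kernel, so each y value lands in exactly one bin:

>>> import numpy as np
>>> from YSOVAR import lightcurves
>>> xbins = np.array([0., 1., 2., 3.])   # edges of 3 bins
>>> x = np.array([0.5, 0.7, 1.5, 2.5, 2.6])
>>> y = np.array([1., 2., 3., 4., 5.])
>>> out, n = lightcurves.slotting(xbins, x, y)
>>> # with the default normalize=True, out holds the per-bin averages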