975 results for Log-log Method


Relevance:

30.00%

Publisher:

Abstract:

In 2001, the Commission on Dietetic Registration (CDR) will begin a new process of recertifying Registered Dietitians (RDs) using a self-directed lifelong learning portfolio model. The model, entitled Professional Development 2001 (PD 2001), is designed to increase competency through targeted learning. The portfolio consists of five steps: reflection, learning needs assessment, formulation of a learning plan, maintenance of a learning log, and evaluation of the learning plan. By targeting learning, PD 2001 is predicted to foster more up-to-date practitioners than the current method, which requires only a quantity of continuing education hours. This is the first major change in the credentialing system since 1975, and the success or failure of the new system will affect the future of approximately 60,000 practitioners. The purpose of this study was to determine the readiness of RDs to change to the new system. Since the model depends on setting goals and developing learning plans, this study examined the methods dietitians use to determine their five-year goals and direction in practice. It also determined RDs' attitudes towards PD 2001 and identified some of the factors that influenced their beliefs. A dual methodological design using focus groups and questionnaires was employed. Sixteen focus groups were held during state dietetic association meetings. Demographic data were collected from the 132 registered dietitians who participated in the focus groups using a self-administered questionnaire. The audiotaped sessions were transcribed into 643 pages of text and analyzed using Non-numerical Unstructured Data Indexing, Searching and Theorizing software (NUD*IST version 4). Thirty-four of the 132 participants (26%) had formal five-year goals, and fifty-four (41%) performed annual self-assessments. In general, dietitians did not currently have professional goals or conduct self-assessments, and they reported lacking the skills or confidence to perform these tasks. Major barriers to successful implementation of PD 2001 are uncertainty, misinterpretation, and misinformation about the process and purpose, which in turn contribute to negative impressions. Renewed vigor in providing a positive, accurate message, along with presenting goal-setting strategies, will be necessary for better acceptance of this professional development process.

Relevance:

30.00%

Publisher:

Abstract:

Accurate knowledge of the time since death, or postmortem interval (PMI), has enormous legal, criminological, and psychological impact. In this study, an investigation was made to determine whether the relationship between the degradation of the human cardiac structural protein cardiac Troponin T and PMI could be used as an indicator of time since death, thus providing a rapid, high-resolution, sensitive, and automated methodology for the determination of PMI. The use of cardiac Troponin T (cTnT), a protein found in heart tissue, as a selective marker for cardiac muscle damage has shown great promise in the determination of PMI. An optimized conventional immunoassay method was developed to quantify intact and fragmented cTnT. A small sample of cardiac tissue, which is less affected than other tissues by external factors, was taken, homogenized, extracted with magnetic microparticles, separated by SDS-PAGE, and visualized by Western blot, probing with a monoclonal antibody against cTnT, followed by labeling and detection with available scanners. This conventional immunoassay provides proper detection and quantitation of the cTnT protein in cardiac tissue as a complex matrix; however, it does not provide the analyst with immediate results. Therefore, a competitive separation method using capillary electrophoresis with laser-induced fluorescence (CE-LIF) was developed to study the interaction between the human cTnT protein and a monoclonal anti-Troponin T antibody. Analysis of the results revealed a linear relationship between the percentage of degraded cTnT and the log of the PMI, indicating that intact cTnT could be detected in human heart tissue up to 10 days postmortem at room temperature and beyond two weeks at 4°C. The data demonstrate that this technique can provide an extended time range during which PMI can be estimated more accurately than with currently used methods, and that it represents a major advance in time-of-death determination through a fast, reliable, semi-quantitative measurement of a biochemical marker from an organ protected from outside factors.
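A minimal sketch of the calibration idea reported above, a linear fit of the percentage of degraded cTnT against log(PMI) that is then inverted to estimate PMI; the numbers are invented for illustration, not the study's data:

```python
import numpy as np

# Invented calibration data: postmortem interval (hours) and the
# corresponding percentage of cTnT found in degraded (fragmented) form.
pmi_hours = np.array([12, 24, 48, 96, 144, 192, 240])
pct_degraded = np.array([18.0, 29.0, 41.0, 55.0, 63.0, 70.0, 76.0])

# Fit the reported relationship: %degraded = a + b * log(PMI).
b, a = np.polyfit(np.log(pmi_hours), pct_degraded, 1)

def estimate_pmi(pct):
    """Invert the calibration line to estimate PMI (hours) from %degraded cTnT."""
    return float(np.exp((pct - a) / b))

print(f"fit: %degraded = {a:.1f} + {b:.1f} * ln(PMI)")
print(f"estimated PMI at 50% degradation: {estimate_pmi(50.0):.1f} h")
```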

Relevance:

30.00%

Publisher:

Abstract:

Experimental and theoretical studies of noise processes in various kinds of AlGaAs/GaAs heterostructures with a quantum well are reported. The measurement setup, involving a Fast Fourier Transform and analog wave analyzer covering the frequency range from 10 Hz to 1 MHz, a computerized data storage and processing system, and a cryostat covering the temperature range from 78 K to 300 K, is described in detail. The current noise spectra are obtained with the “three-point method”, using Quan-Tech and avalanche noise sources for calibration. The properties of both GaAs and AlGaAs materials and of field-effect transistors based on the two-dimensional electron gas in the interface quantum well are discussed. Extensive measurements are performed on three types of heterostructures, viz., Hall structures with a large spacer layer, modulation-doped non-gated FETs, and more standard gated FETs; all structures are grown by MBE techniques. The Hall structures show Lorentzian generation-recombination noise spectra with nearly temperature-independent relaxation times. This noise is attributed to g-r processes in the 2D electron gas. For the TEGFET structures, we observe several Lorentzian g-r noise components with strongly temperature-dependent relaxation times. This noise is attributed to trapping processes in the doped AlGaAs layer. The trap level energies are determined from an Arrhenius plot of log(τT²) versus 1/T as well as from the plateau values. The theory used to interpret these measurements and to extract the defect level data is reviewed and further developed. Good agreement with the data is found for all reported devices.
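As a hedged illustration of the Arrhenius analysis mentioned above, the sketch below extracts a trap level energy from the slope of ln(τT²) versus 1/T; the relaxation times are invented, not the measured data:

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

# Invented g-r relaxation times (s) at several temperatures (K).
T = np.array([150.0, 175.0, 200.0, 225.0, 250.0])
tau = np.array([2.1e-3, 3.4e-4, 7.9e-5, 2.4e-5, 9.0e-6])

# Arrhenius analysis: ln(tau * T^2) is linear in 1/T with slope E_t / k_B,
# so the trap level energy follows from a least-squares fit of the plot.
slope, intercept = np.polyfit(1.0 / T, np.log(tau * T**2), 1)
E_t = slope * K_B
print(f"trap level energy: {E_t:.2f} eV")
```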

Relevance:

30.00%

Publisher:

Abstract:

The adverse health effects of long-term exposure to lead are well established, with major uptake into the human body occurring mainly through oral ingestion by young children. Lead-based paint was frequently used in homes built before 1978, particularly in inner-city areas, and minority populations experience the effects of lead poisoning disproportionately. Lead-based paint abatement is costly. In the United States, residents of about 400,000 homes, occupied by 900,000 young children, lack the means to correct lead-based paint hazards. The magnitude of this problem demands research on affordable methods of hazard control. One method is encapsulation, defined as any covering or coating that acts as a permanent barrier between the lead-based paint surface and the environment. Two encapsulants were tested for reliability and effective life span through an accelerated lifetime experiment that applied stresses exceeding those encountered under normal use conditions. The resulting time-to-failure data were used to extrapolate the failure time under conditions of normal use, and statistical analysis and models of the test data allow forecasting of long-term reliability relative to the 20-year encapsulation requirement. Typical housing material specimens simulating walls and doors coated with lead-based paint were overstressed before encapsulation; a second, un-aged set was also tested. Specimens were monitored after the stress test with a surface chemical testing pad to identify the presence of lead breaking through the encapsulant. Graphical analysis proposed by Shapiro and Meeker and the general log-linear model developed by Cox were used to obtain results. Findings for the 80% reliability time to failure varied, with close to 21 years of life under normal use conditions for encapsulant A; application of product A on the aged gypsum and aged wood substrates yielded slightly lower times. Encapsulant B had an 80% reliable life of 19.78 years. This study reveals that encapsulation technologies can offer safe and effective control of lead-based paint hazards and may be less expensive than other options. The U.S. Department of Health and Human Services and the CDC are committed to eliminating childhood lead poisoning by 2010. This ambitious target is feasible, provided there is an efficient application of innovative technology, a goal to which this study aims to contribute.
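The study itself used Shapiro and Meeker's graphical analysis and Cox's log-linear model; as a rough, generic stand-in for an "80% reliable life" calculation, the sketch below fits a two-parameter Weibull model to hypothetical failure times and reports the 20th percentile of the fitted failure-time distribution:

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical failure times (years, already extrapolated to normal-use
# conditions) for one encapsulant.
failure_times = np.array([18.2, 19.5, 20.1, 21.3, 22.0, 22.8, 23.5, 24.9])

# Fit a two-parameter Weibull model (location fixed at zero).
shape, loc, scale = weibull_min.fit(failure_times, floc=0)

# The "80% reliable life" is the time by which only 20% of specimens have
# failed, i.e. the 20th percentile of the fitted failure-time distribution.
b20 = weibull_min.ppf(0.20, shape, loc=loc, scale=scale)
print(f"Weibull shape={shape:.2f}, scale={scale:.1f}, 80% reliable life = {b20:.1f} years")
```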

Relevance:

30.00%

Publisher:

Abstract:

More information is now readily available to computer users than at any time in human history; however, much of this information is often inaccessible to people with blindness or low vision, for whom information must be presented non-visually. Currently, screen readers are able to verbalize on-screen text using text-to-speech (TTS) synthesis; however, much of this vocalization is inadequate for browsing the Internet. An auditory interface that incorporates auditory-spatial orientation was created and tested. For information that can be structured as a two-dimensional table, links can be semantically grouped as cells in a row within an auditory table, which provides a consistent structure for auditory navigation. An auditory display prototype was tested. Sixteen legally blind subjects participated in this research study. Results demonstrated that stereo panning was an effective technique for audio-spatially orienting non-visual navigation in a five-row, six-column HTML table as compared to a centered, stationary synthesized voice. These results were based on measuring the time-to-target (TTT), the amount of time elapsed from the first prompting to the selection of each tabular link. Preliminary analysis of the TTT values recorded during the experiment showed that the populations did not conform to the ANOVA requirements of normality and equality of variances; therefore, the data were transformed using the natural logarithm. The repeated-measures two-factor ANOVA results show that the logarithmically transformed TTTs were significantly affected by the tonal variation method, F(1,15) = 6.194, p = 0.025, marginally affected by the stereo spatialization method, F(1,15) = 4.240, p = 0.057, and not significantly affected by the interaction of both methods, F(1,15) = 1.381, p = 0.258. These results suggest that some confusion may be caused in the subject when employing both of these methods simultaneously. The significant effect of tonal variation indicates that the preceding tones actually increased the average TTT; in other words, the presence of preceding tones increases task completion time on average. The marginally significant effect of stereo spatialization decreases the average log(TTT) from 2.405 to 2.264.
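A minimal sketch of a repeated-measures two-factor ANOVA on log-transformed times, in the spirit of the analysis above, using simulated data and the statsmodels AnovaRM class; the factor names and the data are illustrative, not the study's:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

# Simulated within-subject design: 16 subjects x 2 tonal-variation levels
# x 2 stereo-spatialization levels, with one time-to-target (TTT, s) per cell.
rows = []
for subject in range(16):
    for tone in ("tones", "no_tones"):
        for stereo in ("stereo", "mono"):
            rows.append({"subject": subject, "tone": tone, "stereo": stereo,
                         "ttt": rng.lognormal(mean=2.3, sigma=0.3)})
df = pd.DataFrame(rows)

# TTT is right-skewed, so analyze the natural-log transform, mirroring the
# repeated-measures two-factor ANOVA described above.
df["log_ttt"] = np.log(df["ttt"])
res = AnovaRM(df, depvar="log_ttt", subject="subject", within=["tone", "stereo"]).fit()
print(res)
```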

Relevance:

30.00%

Publisher:

Abstract:

The study was carried out on the main plots of a large grassland biodiversity experiment (the Jena Experiment). In the main experiment, 82 grassland plots of 20 x 20 m were established from a pool of 60 species belonging to four functional groups (grasses, legumes, tall herbs and small herbs). In May 2002, varying numbers of plant species from this pool were sown into the plots to create gradients of plant species richness (1, 2, 4, 8, 16 and 60 species) and functional richness (1, 2, 3 or 4 functional groups). Plots were maintained by bi-annual weeding and mowing. We tracked soil microbial basal respiration (BR; µl O2/g dry soil/h) and microbial biomass carbon (Cmic; µg C/g dry soil) over a period of 12 years (2003-2014) and examined the role of plant diversity and plant functional group composition for the spatial and temporal stability (calculated as mean/SD) of soil microbial properties (basal respiration and biomass) in bulk soil. Our results highlight the importance of plant functional group composition for the spatial and temporal stability of soil microbial properties, and hence for microbially driven ecosystem processes, such as decomposition and element cycling, in temperate semi-natural grassland.
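A small illustration of the stability measure used above (mean divided by standard deviation), with made-up yearly values for a single plot:

```python
import numpy as np

def temporal_stability(values):
    """Stability as defined above: mean divided by standard deviation."""
    values = np.asarray(values, dtype=float)
    return values.mean() / values.std(ddof=1)

# Made-up yearly basal respiration for one plot (µl O2/g dry soil/h), 2003-2014.
basal_respiration = [2.1, 2.4, 2.0, 2.6, 2.3, 2.2, 2.5, 2.7, 2.4, 2.3, 2.6, 2.5]
print(f"temporal stability: {temporal_stability(basal_respiration):.2f}")
```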

Relevance:

30.00%

Publisher:

Abstract:

In this work, reference, generic, and similar drug products containing the active ingredients acetylsalicylic acid (AAS), paracetamol, captopril, hydrochlorothiazide and mebendazole were purchased from local pharmacies and studied by thermogravimetry (TG) and differential scanning calorimetry (DSC). Thermal decomposition was assessed to obtain, by the Ozawa method, the activation energy in an inert atmosphere (nitrogen), using three different heating rates (5, 10 and 20 °C min-1). The AAS reference formulation was the only one whose thermogravimetric profile differed from the others (generic and similar), indicating a likely interaction between the active ingredient and the excipients. In the plot of the log of the heating rate against the inverse temperature, no linearity of the data was observed for the AAS reference formulation; that is, there was no correlation between the percentage of mass loss and the activation energy involved in its thermal decomposition. The differential scanning calorimetry analyses were performed in a nitrogen atmosphere with a heating rate of 10 °C min-1. For these same drugs, the melting points obtained from the curves were, except for hydrochlorothiazide, consistent with the literature. Hydrochlorothiazide presented a melting point well below that reported in the literature, which may be explained by an interaction of the active ingredient with the excipient lactose. In the purity study using the Van't Hoff equation, the reference, generic and similar formulations of hydrochlorothiazide and mebendazole showed impurity contents below the limit established for the applicability of this equation (2.5 mol%).
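A hedged sketch of the Ozawa (Flynn-Wall-Ozawa) calculation referred to above: at a fixed conversion, log10 of the heating rate is regressed on inverse temperature and the slope yields the apparent activation energy. The temperatures below are invented for illustration:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Invented temperatures (K) reached at a fixed conversion (e.g. 10% mass
# loss) for the three heating rates used above (5, 10 and 20 °C/min).
beta = np.array([5.0, 10.0, 20.0])          # heating rates, K/min
T_alpha = np.array([478.0, 489.0, 501.0])   # temperature at the chosen conversion, K

# Flynn-Wall-Ozawa: log10(beta) is linear in 1/T with slope -0.4567 * Ea / R.
slope, _ = np.polyfit(1.0 / T_alpha, np.log10(beta), 1)
Ea = -slope * R / 0.4567  # J/mol
print(f"apparent activation energy: {Ea / 1000:.0f} kJ/mol")
```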

Relevance:

30.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size, despite huge increases in the value of n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n=all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
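For orientation only, the sketch below fits a standard maximum-likelihood log-linear model to a small hypothetical contingency table via a Poisson GLM in statsmodels; this is not the Diaconis--Ylvisaker Bayesian approach or the Gaussian approximation developed in Chapter 4:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical 2x2 contingency table, flattened to one row per cell.
df = pd.DataFrame({
    "a": ["no", "no", "yes", "yes"],
    "b": ["no", "yes", "no", "yes"],
    "count": [25, 10, 8, 30],
})

# Maximum-likelihood log-linear fit via a Poisson GLM on the cell counts;
# the a:b interaction term captures the association between the two variables.
fit = smf.glm("count ~ a * b", data=df, family=sm.families.Poisson()).fit()
print(fit.summary())
```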

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
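The slow mixing described above can be illustrated with a toy truncated-normal (Albert-Chib) data augmentation sampler for an intercept-only probit model with a flat prior; the data are simulated rare-event observations, not the quantitative advertising dataset:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)

# Simulated rare-event data: n observations, of which only `successes` are ones.
n, successes = 5_000, 5

def albert_chib(n_iter=1000):
    """Data augmentation sampler for an intercept-only probit model, flat prior."""
    beta = 0.0
    draws = np.empty(n_iter)
    for t in range(n_iter):
        # z_i | beta, y_i: truncated normal above 0 for successes, below 0 otherwise.
        z_pos = truncnorm.rvs(-beta, np.inf, loc=beta, size=successes, random_state=rng)
        z_neg = truncnorm.rvs(-np.inf, -beta, loc=beta, size=n - successes, random_state=rng)
        # beta | z ~ N(mean(z), 1/n) for the intercept-only design.
        zbar = (z_pos.sum() + z_neg.sum()) / n
        beta = rng.normal(zbar, 1.0 / np.sqrt(n))
        draws[t] = beta
    return draws

draws = albert_chib()
# Lag-1 autocorrelation of the beta chain; values near 1 indicate poor mixing.
acf1 = np.corrcoef(draws[:-1], draws[1:])[0, 1]
print(f"lag-1 autocorrelation of beta: {acf1:.3f}")
```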

Relevance:

30.00%

Publisher:

Abstract:

Lemmings construct nests of grass and moss under the snow during winter, and counting these nests in spring is one method of obtaining an index of winter density and habitat use. We counted winter nests after snow melt on fixed grids in 5 areas scattered across the Canadian Arctic and compared these nest counts to population density estimated by mark-recapture on the same areas in spring and during the previous autumn. Collared lemmings were a common species in most areas, some sites had an abundance of brown lemmings, and only 2 sites had tundra voles. Winter nest counts were correlated with lemming densities estimated in the following spring (r(s) = 0.80, P < 0.001), but less well correlated with densities the previous autumn (r(s) = 0.55, P < 0.001). Winter nest counts can be used to predict spring lemming densities with a log-log regression that explains 64% of the observed variation. Winter nest counts are best treated as an approximate index and should not be used when precise, quantitative lemming density estimates are required. Nest counts also can be used to provide general information about habitat use in winter, predation rates by weasels, and the extent of winter breeding.
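A minimal sketch of the log-log regression idea above, predicting spring density from winter nest counts; the counts and densities are invented, not the study's data:

```python
import numpy as np

# Invented paired observations: winter nest counts and spring lemming
# densities (animals/ha) from several grids.
nests = np.array([3, 7, 12, 20, 35, 60, 90])
spring_density = np.array([0.4, 0.9, 1.6, 2.2, 4.1, 6.0, 8.5])

# Log-log regression, as used above to predict spring density from nest counts.
b, a = np.polyfit(np.log(nests), np.log(spring_density), 1)
r2 = np.corrcoef(np.log(nests), np.log(spring_density))[0, 1] ** 2

def predict_density(nest_count):
    """Back-transform the fitted log-log line to the original scale."""
    return float(np.exp(a) * nest_count ** b)

print(f"log(density) = {a:.2f} + {b:.2f} * log(nests), R^2 = {r2:.2f}")
print(f"predicted spring density for 25 nests: {predict_density(25):.1f} animals/ha")
```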

Relevance:

30.00%

Publisher:

Abstract:

The lamination and burrowing patterns in 17 box cores were analyzed with the aid of X-ray photographs and thin sections. A standardized method of log plotting made statistical analysis of the data possible. Several 'structure types' were established, although it was realized that the boundaries are purely arbitrary divisions in what can sometimes be a continuous sequence. In the transition zone between the marginal sand facies and the fine-grained basin facies, muddy sediment is found which contains particularly well differentiated, alternating laminae. This zone is also characterized by layers rich in plant remains. The alternation of laminae shows a high degree of statistical scattering. Even though a small degree of cyclic periodicity could be defined, it was impossible to correlate individual layers from core to core across the bay. However, through statistical handling of the plots, zones could be separated on the basis of the number of sand layers they contained. These more or less sandy zones clarified the bottom reflections seen in the echograph records from the area. The manner of facies change across the bay suggests that no strong bottom currents are effective in Eckernförde Bay. The marked asymmetry between the north and south flanks of the profile can be attributed to the stronger action of waves on the more exposed areas. Grain-size analyses were made on the more homogeneous units found in a core from the transition-facies zone. The results indicate that the most pronounced differences between layers appear in the silt range, and although the differences are slight, they are statistically significant. Layers rich in plant remains were wet-sieved in order to separate the plant detritus, which was then analyzed in a sediment settling balance and found to be hydrodynamically equivalent to a well-sorted, fine-grained sand. A special, rhythmic cross-bedding type with dimensions in the millimeter range has been named 'crypto-cross-lamination' and is thought to represent rapid sedimentation in an area where only very weak bottom currents are present. It is found only in the deepest part of the basin. Relatively large sand grains, scattered within layers of clayey-silty matrix, seem to be transported by flotation. Thin-section examination showed that in the inner part of Eckernförde Bay carbonate grains (e.g., foraminifera shells) were preserved throughout the cores, while in the outer part of the bay they were absent. Well defined tracks and burrows are relatively rare in all of the facies in comparison to the generally strongly developed deformation burrowing. The application of special measures for the deformation burrowing made it possible to plot its intensity in profile for each core. A degree of regularity could be found in these burrowing-intensity plots, with higher values appearing in the sandy facies, but with no clear differences between sand and silt layers in the transition facies. Small sections in the profiles of the deepest part of the bay show no bioturbation at all.

Relevance:

30.00%

Publisher:

Abstract:

We present revised magnetostratigraphic interpretations for Ocean Drilling Program Sites 1095, 1096, and 1101, cored in sediment drifts located off the Pacific margin of the Antarctic Peninsula. The revised interpretations incorporate a variety of observations and results obtained since the end of Leg 178, of which the most significant are new paleomagnetic measurements from U-channel samples, composite depth scales that allow stratigraphic correlation between multiple holes cored at a site, and revised biostratigraphic interpretations. The U-channel data, which include more than 102,000 paleomagnetic observations from more than 13,400 intervals along U-channel samples, are included as electronic files. The magnetostratigraphic records at all three sites are consistent with sedimentation being continuous over the intervals cored, although the data resolution does not preclude short hiatuses less than a few hundred thousand years in duration. The magnetostratigraphic records start at the termination of Subchron C4Ar.2n (9.580 Ma) at ~515 meters composite depth (mcd) for Site 1095, at the onset of Subchron C3n.2n (4.620 Ma) at ~489.68 mcd for Site 1096, and at the onset of Subchron C2An.1n (3.040 Ma) at 209.38 meters below seafloor for Site 1101. All three sites provide paleomagnetic records that extend upward through the Brunhes Chron.

Relevance:

30.00%

Publisher:

Abstract:

A compositional multivariate approach is used to analyse regional-scale soil geochemical data obtained as part of the Tellus Project generated by the Geological Survey of Northern Ireland (GSNI). The multi-element total concentration data comprise XRF analyses of 6862 rural soil samples collected at 20 cm depth on a non-aligned grid at one site per 2 km2. Censored data were imputed using published detection limits. Using these imputed values for 46 elements (including LOI), each soil sample site was assigned to the regional geology map provided by GSNI, initially using the dominant lithology for the map polygon. Northern Ireland includes a diversity of geology representing a stratigraphic record from the Mesoproterozoic up to and including the Palaeogene. However, the advance of ice sheets and their meltwaters over the last 100,000 years has left at least 80% of the bedrock covered by superficial deposits, including glacial till and post-glacial alluvium and peat. The question is to what extent the soil geochemistry reflects the underlying geology or the superficial deposits. To address this, the geochemical data were transformed using centered log ratios (clr) to observe the requirements of compositional data analysis and avoid closure issues. Following this, compositional multivariate techniques, including compositional principal component analysis (PCA) and the minimum/maximum autocorrelation factor (MAF) analysis method, were used to determine the influence of the underlying geology on the soil geochemistry signature. PCA showed that 72% of the variation was accounted for by the first four principal components (PCs), implying “significant” structure in the data. Analysis of variance showed that only 10 PCs were necessary to classify the soil geochemical data. To improve on PCA by using the spatial relationships of the data, a classification based on MAF analysis was undertaken using the first 6 dominant factors. Understanding the relationship between soil geochemistry and superficial deposits is important for environmental monitoring of fragile ecosystems such as peat. To explore whether peat cover could be predicted from the classification, the lithology designation was adapted to include the presence of peat, based on GSNI superficial deposit polygons, and linear discriminant analysis (LDA) was undertaken. Prediction accuracy for the LDA classification improved from 60.98%, based on PCA using 10 principal components, to 64.73% using MAF based on the 6 most dominant factors. The misclassification of peat may reflect degradation of peat-covered areas since the creation of the superficial deposit classification. Further work will examine the influence of the underlying lithologies on elemental concentrations in peat and the effect of this on the classification analysis.
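A brief sketch of the centered log-ratio (clr) transformation followed by PCA, as described above, applied to a few hypothetical compositions (the values are illustrative, not Tellus data):

```python
import numpy as np

def clr(compositions):
    """Centered log-ratio transform of an (n_samples, n_parts) composition matrix."""
    logx = np.log(compositions)
    return logx - logx.mean(axis=1, keepdims=True)

# Hypothetical element compositions in percent (each row sums to 100).
X = np.array([
    [55.0, 20.0, 15.0, 10.0],
    [60.0, 18.0, 12.0, 10.0],
    [48.0, 25.0, 17.0, 10.0],
    [52.0, 22.0, 16.0, 10.0],
])

# PCA on the clr-transformed data via SVD of the column-centered matrix.
Z = clr(X)
Zc = Z - Z.mean(axis=0)
U, s, Vt = np.linalg.svd(Zc, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("proportion of variance by principal component:", np.round(explained, 3))
```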

Relevance:

30.00%

Publisher:

Abstract:

It is crucial to understand the role that labor market positions might play in creating gender differences in work–life balance. One theoretical approach to understanding this relationship is spillover theory, which argues that an individual's life domains are integrated, meaning that well-being can be transmitted between life domains. Based on data collected in Hungary in 2014, this paper shows that work-to-family spillover does not affect both genders the same way: the effect of work on family life tends to be more negative for women than for men. Two explanations have been formulated to understand this gender inequality. According to the findings of the analysis, gender is conditionally independent of spillover if financial status and flexibility of work are also incorporated into the analysis. This means that the relative disadvantage of women in terms of spillover can be attributed to their lower financial status and their relatively low access to flexible jobs. In other words, gender inequalities in work-to-family spillover are deeply affected by individual labor market positions. The observation of the labor market's effect on work–life balance is especially important in Hungary, since Hungary has one of the least flexible labor arrangements in Europe. A marginal log-linear model, a method for multivariate categorical analysis, was applied in this analysis.

Relevance:

30.00%

Publisher:

Abstract:

This research includes a review of the log management of the company Telia, together with a comparison of the two log management systems Splunk and ELK. The review of the company's log management shows that log messages are stored in files on a hard drive that can be accessed through the network, and that the log messages are system-specific. ELK is able to fetch log messages of different formats simultaneously, but this is not possible in Splunk, where the process of uploading log messages has to be repeated for log messages that have different formats. Both systems store log messages through a file system on the hard drive where the systems are installed. In networks that involve multiple servers, ELK distributes the log messages between the servers, which reduces the workload of performing searches and storing large amounts of data. Using Splunk in networks can also reduce the workload; this is done by using forwarders that send the log messages to one or more central servers which store the messages. Searches of log messages in Splunk are performed through a graphical interface. Searches in ELK are done through a REST API, which external systems can also use to retrieve search results; Splunk likewise has a REST API that external systems can use to receive search results. The research revealed that ELK had a lower search time than Splunk. However, no method was found that could be used to measure the indexing time of ELK, which meant that no comparison could be made with respect to the indexing time of Splunk. Future work should investigate whether there is any possibility of measuring the indexing time of ELK. Another recommendation is to include more log management systems in the research, to identify further candidates that may be suitable for the company Telia. A further suggestion is to run performance tests in a network with multiple servers and thereby draw conclusions about how the systems perform in practice.
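As an illustration of the REST-based searching mentioned above, the sketch below queries the _search endpoint of Elasticsearch (the search and storage component of ELK) from Python; the host, index name, and field names are assumptions, not details from the report:

```python
import json
import requests

# Assumed deployment details (host, index and field names are placeholders).
ES_URL = "http://localhost:9200"
INDEX = "telia-logs"

# Elasticsearch _search request: the ten most recent messages matching "ERROR".
query = {
    "query": {"match": {"message": "ERROR"}},
    "sort": [{"@timestamp": {"order": "desc"}}],
    "size": 10,
}

resp = requests.get(f"{ES_URL}/{INDEX}/_search", json=query, timeout=10)
resp.raise_for_status()
for hit in resp.json()["hits"]["hits"]:
    print(json.dumps(hit["_source"], indent=2))
```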

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08