866 results for BIG-BANG NUCLEOSYNTHESIS


Relevance:

20.00%

Publisher:

Abstract:

The authors identify the firm-specific core competencies that Panera Bread has relied on to achieve a competitive advantage in its business domain. The study illustrates how the company scans its dynamically changing environment and tailors its products and services accordingly.

Relevance:

20.00%

Publisher:

Abstract:

The purpose of this study was to determine the degree to which the Big-Five personality taxonomy, as represented by the Minnesota Multiphasic Personality Inventory (MMPI), California Psychological Inventory (CPI), and Inwald Personality Inventory (IPI) scales, predicted a variety of police officer job performance criteria. Data were collected archivally for 270 sworn police officers from a large Southeastern municipality. Predictive data consisted of scores on the MMPI, CPI, and IPI scales grouped in terms of the Big-Five factors. The overall score on the Wonderlic was included in order to assess criterion variance accounted for by cognitive ability, and a psychologist's overall rating of predicted job fit was used to assess the variance accounted for by a psychological interview. Criterion data consisted of supervisory ratings of overall job performance, State Examination scores, police academy grades, and termination. Based on the literature, it was hypothesized that officers scoring higher on Extroversion, Conscientiousness, Agreeableness, and Openness to Experience, and lower on Neuroticism (together, the Big-Five factors), would outperform their peers across a variety of job performance criteria. It was further hypothesized that police officers higher in cognitive ability and masculinity, and lower in mania, would also outperform their counterparts. Results indicated that several of the Big-Five factors, namely Neuroticism, Conscientiousness, Agreeableness, and Openness to Experience, were predictive of several of the job performance criteria. Such findings imply that the Big-Five is a useful predictor of police officer job performance. Study limitations and implications for future research are discussed.
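
As a point of reference only, here is a minimal sketch of the incremental-validity logic the abstract describes (whether cognitive ability accounts for criterion variance beyond the Big-Five scores). The data, coefficients, and variable names below are synthetic illustrations, not the study's archival data:

```python
# Hypothetical sketch: incremental R^2 of cognitive ability over the Big Five.
# All values are synthetic; they stand in for the study's archival scores.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
big_five = rng.normal(size=(270, 5))    # Big-Five factor scores (N, E, O, A, C)
wonderlic = rng.normal(size=(270, 1))   # overall cognitive ability score
performance = (big_five @ np.array([-0.1, 0.1, 0.1, 0.15, 0.3])
               + 0.4 * wonderlic[:, 0] + rng.normal(size=270))

# Step 1: Big-Five factors alone.
r2_base = LinearRegression().fit(big_five, performance).score(big_five, performance)
# Step 2: add cognitive ability; the gain in R^2 is its incremental validity.
both = np.hstack([big_five, wonderlic])
r2_full = LinearRegression().fit(both, performance).score(both, performance)
print(f"incremental R^2 from cognitive ability: {r2_full - r2_base:.3f}")
```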

Relevance:

20.00%

Publisher:

Abstract:

The pine rocklands of South Florida are characterized by an herbaceous flora with many narrowly endemic taxa, a diverse shrub layer containing several palms and numerous tropical hardwoods, and an overstory of South Florida slash pine (Pinus elliottii var. densa). Fire is considered an important environmental factor for these ecosystems, since in its absence these pine forests are replaced by dense hardwood communities, resulting in loss of the characteristic pineland herb flora. Hence, prescribed fire has been used in the Florida Keys pine forests since the creation of the National Key Deer Refuge. However, such prescribed burns were conducted in the Refuge mainly for fuel reduction, without much consideration of ecological factors. From 1998 to 2001, the USGS and Florida International University conducted a four-year research study whose objective was to document the response of pine rockland vegetation to a range of fire management options and to provide the Fish and Wildlife Service and other land managers with information useful in deciding when and where to burn to perpetuate these unique pine forests. This study is described in detail in Snyder et al. (2005).

Relevance:

20.00%

Publisher:

Abstract:

Within Big Cypress National Preserve (BICY), oak-dominated forests and woodlands, as well as tropical and temperate hardwood hammocks, are integral components of the landscape and are biodiversity hotspots for both flora and fauna. These broadleaved forest communities serve as refugia for many of the Preserve's wildlife species during prolonged flooding and fires. However, both prolonged flooding and severe fires, which are important and necessary disturbance agents within this landscape, can have deleterious effects on these forested communities. This is particularly true of fires, which, under extreme conditions associated with drought and elevated fuel loads, can burn through these communities, consuming litter and understory vegetation and top-killing most, if not all, of the trees present.

Relevance:

20.00%

Publisher:

Abstract:

Ongoing debates within the professional and academic communities have raised a number of questions specific to the international audit market. This dissertation consists of three related essays that address such issues. First, I examine whether the propensity to switch between auditors of different sizes (i.e., Big 4 versus non-Big 4) changes as adoption of International Financial Reporting Standards (IFRS) becomes a more common phenomenon, arguing that smaller auditors have an opportunity to invest in the skills and training needed to enter this market. Findings suggest that clients are relatively less (more) likely to switch to (away from) a Big 4 auditor if the client's adoption of IFRS occurs in more recent years. In the second essay, I draw on these inferences and test whether the change in audit fees in the year of IFRS adoption changes over time. As the market becomes less concentrated, larger auditors become less able to demand a premium for their services. Consistent with my arguments, results suggest that the change in audit service fees declines over time, although this effect seems concentrated among the Big 4. I also find that this effect is partially attributable to a differential effect of the auditors' experience in pricing IFRS-related audit services based on the period in which adoption occurs. The results of these two essays offer important implications for policy debates on the costs and benefits of IFRS adoption. In the third essay, I differentiate Big 4 auditors into three classifications (Parent firms, Brand Name affiliates, and Local affiliates) and test for differences in audit fee premiums (relative to non-Big 4 auditors) and audit quality. Results suggest significant heterogeneity between the three classifications on both of these characteristics, an important consideration for future research. Overall, this dissertation provides additional insights into a variety of aspects of the global audit market.

Relevance:

20.00%

Publisher:

Abstract:

For years, researchers and human resources specialists have been searching for predictors of performance as well as for relevant performance dimensions (Barrick & Mount, 1991; Borman & Motowidlo, 1993; Campbell, 1990; Viswesvaran et al., 1996). In 1993, Borman and Motowidlo provided a framework by which traditional predictors such as cognitive ability and the Big Five personality factors predicted two different facets of performance: 1) task performance and 2) contextual performance. A meta-analysis was conducted to assess the validity of this model as well as that of other modified models. The relationships between predictors such as cognitive ability and personality variables and the two outcome variables were assessed. It was determined that even though the two facets of performance may be conceptually different, empirically they overlapped substantially (ρ = .75). Results also show some evidence for both cognitive ability and conscientiousness as predictors of both task and contextual performance. Finally, the possible mediation of predictor-criterion relationships was assessed: the relationship between cognitive ability and contextual performance vanished when task performance was controlled.
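
A minimal sketch of the statistical control step in the last sentence, using the standard first-order partial-correlation formula on synthetic data (the variable names and effect sizes here are illustrative assumptions, not the meta-analytic estimates):

```python
# Sketch: a predictor-criterion correlation can vanish once a third
# variable is partialled out. Data are synthetic and illustrative.
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after removing the linear effect of z."""
    r_xy = np.corrcoef(x, y)[0, 1]
    r_xz = np.corrcoef(x, z)[0, 1]
    r_yz = np.corrcoef(y, z)[0, 1]
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

rng = np.random.default_rng(1)
task = rng.normal(size=5000)                      # task performance
ability = 0.6 * task + rng.normal(size=5000)      # cognitive ability
contextual = 0.7 * task + rng.normal(size=5000)   # contextual performance

print(np.corrcoef(ability, contextual)[0, 1])     # sizeable zero-order correlation
print(partial_corr(ability, contextual, task))    # near zero with task controlled
```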

Relevance:

20.00%

Publisher:

Abstract:

This paper examines five big band arrangements written over two semesters in 1998-1999. I provide an overview and performance considerations for each arrangement. Each arrangement uses common conventions such as unison lines, octave doubling, and four- and five-part voicings in closed, semi-open, and open position. Approach techniques include diatonic, dominant, diminished, chromatic, and parallel. Voicing choices were based primarily on two considerations: the desired texture and the voice-leading options best able to give each part a swinging line while maintaining melodic integrity. Other conventions applied include chord substitution, upper-structure triads, and altered and diminished scales to provide harmonic contrast and color. Each arrangement posed new challenges, and the selected tunes gave the arranger experience in a diverse range of styles. The inherent qualities of each piece's melody and harmonic progression were the primary considerations for selection.

Relevance:

20.00%

Publisher:

Abstract:

Conservation of large felids is not only about collecting ecological information; it is also about understanding people's values, beliefs, attitudes and behaviour. The overarching goal of this thesis is to assess the relationship between people and jaguars and pumas, specifically by contributing to the understanding of public acceptance of big cats and of the cognitive and social forces that influence it. Self-administered questionnaires (n=326) were given to rural residents outside two protected areas in the State of Sao Paulo: Intervales and PETAR state parks. Findings showed that the acceptability of killing big cats varied according to attitudinal type (positive and negative). Additionally, acceptability of jaguars and pumas was influenced by existence values, attitudes and park credibility. Human dimensions research helped in understanding the relationship between people and the big cats, highlighting the need, for example, to improve the credibility of the parks in the communities and to decrease the fear of jaguars and pumas.

Relevance:

20.00%

Publisher:

Abstract:

This thesis presents a study of D3, a JavaScript graphics library for the web, and provides a catalogue of the charts implemented with it and available online. The goal is to evaluate the library and examine its strengths and weaknesses in order to decide whether it is suitable for use in a European project. To this end, the chart-classification methods found in the literature are studied, and the state of the art in data visualization is described. The classification method proposed by the design team is then described, and the chart gallery on the D3 website is catalogued. Finally, an algorithm for selecting a chart according to the user's needs is presented and formally studied.
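
The selection algorithm itself is not reproduced in the abstract; purely as an illustration of the general idea, here is a hypothetical rule-based sketch that maps simple characteristics of the user's data to a chart family (the rules and names are invented for illustration):

```python
# Hypothetical chart-selection rules; not the thesis's actual algorithm.
def select_chart(n_numeric, n_categorical, is_time_series):
    """Map simple data characteristics to a chart family."""
    if is_time_series:
        return "line chart"
    if n_categorical == 1 and n_numeric == 1:
        return "bar chart"
    if n_numeric == 2 and n_categorical == 0:
        return "scatter plot"
    if n_categorical >= 2:
        return "heatmap"
    return "table"

print(select_chart(n_numeric=2, n_categorical=0, is_time_series=False))  # scatter plot
```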

Relevance:

20.00%

Publisher:

Abstract:

At the Large Hadron Collider (LHC), more than 30 petabytes of collision data are collected in each year of data taking. Processing these data requires producing a large volume of simulated events through Monte Carlo techniques. Physics analysis also requires daily access to derived data formats for hundreds of users. The Worldwide LHC Computing Grid (WLCG) is an international collaboration of scientists and computing centres that has met the technological challenges of the LHC, making its scientific programme possible. As data taking continues, and with the recent approval of ambitious projects such as the High-Luminosity LHC, the limits of current computing capacity will soon be reached. One of the keys to overcoming these challenges in the coming decade, especially given the budget constraints of the various national funding agencies, is to use the available computing resources as efficiently as possible. This work aims to develop and evaluate tools that improve the understanding of how both production and analysis data are monitored in CMS. It therefore comprises two parts. The first, concerning distributed analysis, is the development of a tool that quickly analyses the log files of finished job submissions so that, on the next submission, the user can make better use of the computing resources. The second part, concerning the monitoring of both production and analysis jobs, exploits Big Data technologies to provide a more efficient and flexible monitoring service. A noteworthy aspect of these improvements is the ability to avoid a high level of data aggregation at an early stage, and to collect monitoring data at a fine granularity that still allows later reprocessing and on-demand aggregation.
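
To illustrate the idea behind the first tool (scanning the logs of finished jobs so the next submission can use resources better), here is a hypothetical sketch; the directory layout and the "ExitCode" line format are assumptions for illustration, not the actual CMS log format:

```python
# Hypothetical sketch: tally exit codes across finished-job log files.
# The "*.log" layout and "ExitCode: <n>" pattern are assumed, not CMS-specific.
import re
from collections import Counter
from pathlib import Path

def summarize_logs(log_dir):
    """Count job exit codes found in the log files under log_dir."""
    exit_codes = Counter()
    for log_file in Path(log_dir).glob("*.log"):
        text = log_file.read_text(errors="replace")
        match = re.search(r"ExitCode:\s*(\d+)", text)
        if match:
            exit_codes[int(match.group(1))] += 1
    return exit_codes

# Example use: summarize_logs("finished_jobs/") -> e.g. Counter({0: 180, 8021: 12})
```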

Relevance:

20.00%

Publisher:

Abstract:

The work presented in this thesis concerns the development of an alerting system that proactively monitors one or more corporate data sources and flags any irregular conditions it detects; it is to be embedded in existing systems dedicated to data analysis and planning, the so-called Decision Support Systems. A decision support system can provide clear information for the management of the whole enterprise, measuring its performance and projecting future trends. Such systems fall within the broader field of Business Intelligence, which denotes the set of methodologies capable of turning business data into information useful to the decision-making process. The entire thesis work was carried out during an internship at Iconsulting S.p.A., an IT system integrator based in Bologna that specializes mainly in Business Intelligence, Enterprise Data Warehouse and Corporate Performance Management projects. The software described in this thesis was built to fit into a larger context, meeting the requirements of a multinational client that is a leader in the mobile and fixed telephony sector.
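
A minimal sketch of the core alerting idea (rule-based checks over rows read from a data source); the metric names and thresholds below are illustrative assumptions, not the client's actual requirements:

```python
# Hypothetical sketch: flag rows whose metrics fall outside allowed ranges.
def check_alerts(rows, rules):
    """Yield an alert message for every row value that violates a rule."""
    for row in rows:
        for metric, (low, high) in rules.items():
            value = row.get(metric)
            if value is not None and not (low <= value <= high):
                yield f"ALERT: {metric}={value} outside [{low}, {high}]"

rows = [{"daily_orders": 120}, {"daily_orders": 3}]   # e.g. from a warehouse query
rules = {"daily_orders": (50, 500)}                   # illustrative threshold rule
for alert in check_alerts(rows, rules):
    print(alert)                                      # ALERT: daily_orders=3 ...
```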

Relevance:

20.00%

Publisher:

Abstract:

At the moment, the phrases "big data" and "analytics" are often used as if they were magic incantations that will solve all an organization's problems at a stroke. The reality is that data on its own, even with the application of analytics, will not solve any problems. The resources that analytics and big data can consume represent a significant strategic risk if applied ineffectively. Any analysis of data needs to be guided, and needs to lead to action. So while analytics may lead to knowledge and intelligence (in the military sense of that term), it also needs the input of knowledge and intelligence (in the human sense of that term). And somebody then has to do something new or different as a result of the new insights, or the exercise will have served no purpose. Using an analytics example concerning accounts payable in the public sector in Canada, this paper reviews thinking from the domains of analytics, risk management and knowledge management to show some of the pitfalls, and to present a holistic picture of how knowledge management might help tackle the challenges of big data and analytics.

Relevance:

20.00%

Publisher:

Abstract:

Constant technology advances have caused a data explosion in recent years. Accordingly, modern statistical and machine learning methods must be adapted to deal with complex and heterogeneous data types. This is particularly true for analyzing biological data. For example, DNA sequence data can be viewed as categorical variables, with each nucleotide taking one of four categories. Gene expression data, depending on the quantification technology, may be continuous numbers or counts. With the advancement of high-throughput technology, such data have become unprecedentedly rich. Therefore, efficient statistical approaches are crucial in this big data era.

Previous statistical methods for big data often aim to find low-dimensional structures in the observed data. For example, a factor analysis model assumes a latent Gaussian-distributed multivariate vector; with this assumption, a factor model produces a low-rank estimate of the covariance of the observed variables. Another example is the latent Dirichlet allocation model for documents, which assumes Dirichlet-distributed mixture proportions of topics. This dissertation proposes several novel extensions to these statistical methods, developed to address challenges in big data. The novel methods are applied in multiple real-world applications, including the construction of condition-specific gene co-expression networks, estimation of shared topics among newsgroups, analysis of promoter sequences, analysis of political-economic risk data, and estimation of population structure from genotype data.
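
A minimal sketch of the factor-analysis example above, showing how a few latent Gaussian factors yield a low-rank-plus-diagonal covariance estimate; the data are synthetic, and scikit-learn's standard implementation stands in for the dissertation's own extensions:

```python
# Sketch: a 3-factor model gives a rank-3 + diagonal estimate of a 20x20 covariance.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
latent = rng.normal(size=(1000, 3))               # latent Gaussian factors
loadings = rng.normal(size=(3, 20))               # factor loadings
X = latent @ loadings + 0.5 * rng.normal(size=(1000, 20))

fa = FactorAnalysis(n_components=3).fit(X)
# Implied covariance: W^T W + diag(psi) -- low-rank signal plus diagonal noise.
low_rank = fa.components_.T @ fa.components_
cov_hat = low_rank + np.diag(fa.noise_variance_)
print(np.linalg.matrix_rank(low_rank))            # 3
```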

Relevance:

20.00%

Publisher:

Abstract:

A substantial amount of information on the Internet is present in the form of text. The value of this semi-structured and unstructured data has been widely acknowledged, with consequent scientific and commercial exploitation. The ever-increasing data production, however, pushes data analytic platforms to their limits. This thesis proposes techniques for more efficient textual big data analysis suitable for the Hadoop analytic platform. This research explores the direct processing of compressed textual data. The focus is on developing novel compression methods with a number of desirable properties to support text-based big data analysis in distributed environments. The novel contributions of this work include the following. Firstly, a Content-aware Partial Compression (CaPC) scheme is developed. CaPC makes a distinction between informational and functional content, and only the informational content is compressed. Thus, the compressed data remains transparent to existing software libraries, which often rely on functional content to work. Secondly, a context-free bit-oriented compression scheme (Approximated Huffman Compression) based on the Huffman algorithm is developed. It uses a hybrid data structure that allows pattern searching in compressed data in linear time. Thirdly, several modern compression schemes are extended so that the compressed data can be safely split with respect to logical data records in distributed file systems. Furthermore, an innovative two-layer compression architecture is used, in which each compression layer is appropriate for the corresponding stage of data processing. Peripheral libraries are developed that seamlessly link the proposed compression schemes to existing analytic platforms and computational frameworks, and also make the use of the compressed data transparent to developers. The compression schemes have been evaluated on a number of standard MapReduce analysis tasks using a collection of real-world datasets. In comparison with existing solutions, they show substantial improvement in performance and significant reduction in system resource requirements.
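
The Approximated Huffman Compression scheme and its hybrid search structure are not detailed in the abstract; for background, here is a minimal sketch of the classic Huffman construction it builds on (the standard algorithm, not the thesis's variant):

```python
# Sketch of classic Huffman coding: frequent symbols get shorter bit codes.
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a prefix-free bit code for each symbol in text."""
    # Heap items are [frequency, tiebreaker, payload]; a payload is either
    # a symbol (leaf) or a pair of child items (internal node).
    heap = [[f, i, s] for i, (s, f) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        heapq.heappush(heap, [lo[0] + hi[0], next_id, (lo, hi)])
        next_id += 1
    codes = {}
    def walk(node, prefix):
        payload = node[2]
        if isinstance(payload, tuple):              # internal node
            walk(payload[0], prefix + "0")
            walk(payload[1], prefix + "1")
        else:                                       # leaf symbol
            codes[payload] = prefix or "0"
    walk(heap[0], "")
    return codes

print(huffman_codes("abracadabra"))  # 'a' (most frequent) gets the shortest code
```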