830 results for Explicit Integration
Abstract:
This work studies the organization of less-than-truckload trucking from a contractual point of view. We show that the huge number of owner-operators working in the industry hides a much less fragmented reality. Most of those owner-operators are quasi-integrated in higher organizational structures. This hybrid form is generally more efficient than vertical integration because, in the Spanish institutional environment, it lessens serious moral hazard problems, related mainly to the use of the vehicles, and makes it possible to reach economies of scale and density. Empirical evidence suggests that what leads organizations to vertically integrate is not the presence of such economies but hold-up problems, related to the existence of specific assets. Finally, an international comparison suggests that institutional constraints can explain differences in the evolution of vertical integration across countries.
Abstract:
Highly competitive environments are leading companies to implement Supply Chain Management (SCM) to improve performance and gain a competitive advantage. SCM involves integration, co-ordination and collaboration across organisations and throughout the supply chain. This means that SCM requires internal (intraorganisational) and external (interorganisational) integration. This paper examines the Logistics-Production and Logistics-Marketing interfaces and their relation with the external integration process. The study also investigates the causal impact of these internal and external relationships on the company's logistical service performance. To analyse this, an empirical study was conducted in the Spanish Fast Moving Consumer Goods (FMCG) sector.
Abstract:
This paper examines changes in the organization of the Spanish cotton industry from 1720 to 1860 in its core region of Catalonia. As the Spanish cotton industry adopted the most modern technology and experienced the transition to the factory system, cotton spinning and weaving mills became increasingly vertically integrated. Asset specificity, more than other factors, explained this tendency towards vertical integration. The probability of a firm being vertically integrated was higher in districts with high concentration ratios and rose with firm size and the use of modern machinery. Simultaneously, subcontracting predominated in other phases of production and distribution where transaction costs appear to be less important.
Abstract:
The question of where retroviral DNA becomes integrated in chromosomes is important for (i) understanding the mechanisms of viral growth, (ii) devising new anti-retroviral therapies, (iii) understanding how genomes evolve, and (iv) developing safer methods for gene therapy. With the completion of genome sequences for many organisms, it has become possible to study integration targeting by cloning and sequencing large numbers of host-virus DNA junctions, then mapping the host DNA segments back onto the genomic sequence. This allows statistical analysis of the distribution of integration sites relative to the myriad types of genomic features that are also being mapped onto the sequence scaffold. Here we present methods for recovering and analyzing integration site sequences.
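To make the targeting analysis concrete, the following is a minimal Python sketch, not the authors' pipeline: given integration sites already mapped to genomic coordinates, it computes the fraction falling inside annotated features (here, gene intervals) and compares it with a chromosome-matched random control. All coordinates, feature intervals and chromosome sizes below are illustrative assumptions.

```python
import random

def fraction_in_features(sites, features):
    """sites: list of (chrom, pos) integration sites.
    features: dict mapping chrom -> list of (start, end) feature intervals."""
    hits = sum(
        any(start <= pos <= end for start, end in features.get(chrom, ()))
        for chrom, pos in sites
    )
    return hits / len(sites) if sites else 0.0

def matched_random_fraction(sites, chrom_sizes, features, n_draws=1000):
    """Estimate the null fraction by redrawing each site uniformly on its own chromosome."""
    fracs = []
    for _ in range(n_draws):
        shuffled = [(c, random.randint(1, chrom_sizes[c])) for c, _ in sites]
        fracs.append(fraction_in_features(shuffled, features))
    return sum(fracs) / len(fracs)

# Illustrative usage with placeholder data
sites = [("chr1", 1500), ("chr1", 9000), ("chr2", 300)]
features = {"chr1": [(1000, 2000), (5000, 6000)], "chr2": [(100, 400)]}
chrom_sizes = {"chr1": 10000, "chr2": 5000}
print(fraction_in_features(sites, features))        # observed fraction in features
print(matched_random_fraction(sites, chrom_sizes, features))  # null expectation
```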
Abstract:
Understanding the mechanisms through which financial globalization affects economic performance is crucial for evaluating the costs and benefits of opening financial markets. This paper is a first attempt at disentangling the effects of financial integration on the two main determinants of economic performance: productivity (TFP) and investment. I provide empirical evidence from a sample of 93 countries observed between 1975 and 1999. The results suggest that financial integration has a positive direct effect on productivity, while it spurs capital accumulation only with some delay and indirectly, since capital follows the rise in productivity. I control for indirect effects of financial globalization through banking crises. Such episodes depress both investment and TFP, though they are triggered by financial integration only to a minor extent. The paper also discusses a simple model of the effects of financial integration and presents additional empirical evidence supporting it.
Abstract:
We study the link between corruption and economic integration. We show that if an economic union establishes a common regulation for public procurement, the country more prone to corruption benefits more from integration. However, if the propensities to corruption are too distinct, the less corrupt country will not be willing to join the union. This difference in corruption propensities can be offset by a difference in efficiency. We also show that corruption is lower if integration occurs. A panel data analysis for the European Union confirms that more corrupt countries are more favorable towards integration but less acceptable as potential new members.
Abstract:
Accurate characterization of the spatial distribution of hydrological properties in heterogeneous aquifers at a range of scales is a key prerequisite for reliable modeling of subsurface contaminant transport, and is essential for designing effective and cost-efficient groundwater management and remediation strategies. To this end, high-resolution geophysical methods have shown significant potential to bridge a critical gap in subsurface resolution and coverage between traditional hydrological measurement techniques such as borehole log/core analyses and tracer or pumping tests. An important and still largely unresolved issue, however, is how best to quantitatively integrate geophysical data into a characterization study in order to estimate the spatial distribution of one or more pertinent hydrological parameters, thus improving hydrological predictions. Recognizing the importance of this issue, the aim of the research presented in this thesis was first to develop a strategy for the assimilation of several types of hydrogeophysical data having varying degrees of resolution, subsurface coverage, and sensitivity to the hydrologic parameter of interest. In this regard, a novel simulated annealing (SA)-based conditional simulation approach was developed and then tested in its ability to generate realizations of porosity given crosshole ground-penetrating radar (GPR) and neutron porosity log data. This was done successfully for both synthetic and field data sets. A subsequent issue that needed to be addressed involved assessing the potential benefits and implications of the resulting porosity realizations in terms of groundwater flow and contaminant transport. This was investigated synthetically, assuming first that the relationship between porosity and hydraulic conductivity was well-defined. Then, the relationship was itself investigated in the context of a calibration procedure using hypothetical tracer test data. Essentially, the relationship best predicting the observed tracer test measurements was determined given the geophysically derived porosity structure. Both of these investigations showed that the SA-based approach, in general, allows much more reliable hydrological predictions than the other, more elementary techniques considered. Further, the developed calibration procedure was seen to be very effective, even at the scale of tomographic resolution, for predictions of transport. This also held true at locations within the aquifer where only geophysical data were available. This is significant because the acquisition of hydrological tracer test measurements is clearly more complicated and expensive than the acquisition of geophysical measurements. Although the above methodologies were tested using porosity logs and GPR data, the findings are expected to remain valid for a large number of pertinent combinations of geophysical and borehole log data of comparable resolution and sensitivity to the hydrological target parameter. Moreover, the results obtained give us confidence in future developments of integration methodologies for geophysical and hydrological data to improve the 3-D estimation of hydrological properties.
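As a rough illustration of the simulated annealing idea behind conditional simulation, the sketch below (not the thesis workflow) perturbs a porosity grid one cell at a time, keeps assumed borehole values fixed as conditioning data, and accepts or rejects moves against a simplified objective under a geometric cooling schedule. The grid size, the GPR-derived column-mean trend used as the objective, and the cooling parameters are all placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
nz, nx = 20, 30                                   # grid: depth x distance (assumed)
target_col_mean = 0.25 + 0.05 * np.sin(np.linspace(0, np.pi, nx))  # assumed GPR-derived trend
borehole = {(5, 0): 0.22, (12, 0): 0.30, (8, nx - 1): 0.27}        # assumed log data

def objective(field):
    # misfit between simulated and target column-mean porosity
    return np.sum((field.mean(axis=0) - target_col_mean) ** 2)

field = rng.uniform(0.15, 0.35, size=(nz, nx))
for (i, j), v in borehole.items():
    field[i, j] = v                               # conditioning data are honoured exactly

T, cooling = 1.0, 0.999                           # initial temperature and geometric cooling
current = objective(field)
for _ in range(20000):
    i, j = rng.integers(nz), rng.integers(nx)
    if (i, j) in borehole:
        continue                                  # never perturb conditioning points
    old = field[i, j]
    field[i, j] = np.clip(old + rng.normal(0, 0.02), 0.05, 0.45)
    new = objective(field)
    if new > current and rng.random() >= np.exp((current - new) / T):
        field[i, j] = old                         # reject uphill move (Metropolis rule)
    else:
        current = new                             # accept move
    T *= cooling
```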
Abstract:
Although it is commonly accepted that most macroeconomic variables are non-stationary, it is often difficult to identify the source of the non-stationarity. In particular, it is well known that integrated and short-memory models containing trending components that may display sudden changes in their parameters share some statistical properties that make their identification a hard task. The goal of this paper is to extend the classical testing framework for I(1) versus I(0) + breaks by considering a more general class of models under the null hypothesis: non-stationary fractionally integrated (FI) processes. A similar identification problem holds in this broader setting, which is shown to be a relevant issue from both a statistical and an economic perspective. The proposed test is developed in the time domain and is very simple to compute. The asymptotic properties of the new technique are derived, and it is shown by simulation that it is very well-behaved in finite samples. To illustrate the usefulness of the proposed technique, an application using inflation data is also provided.
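For readers unfamiliar with fractional integration, the short sketch below (an illustration of the model class, not the paper's test) simulates a non-stationary FI(d) series from its MA(∞) weights and contrasts the lag-1 autocorrelation of its first differences with that of an I(1) random walk: the FI differences are anti-persistent, while the I(1) differences are approximately white noise.

```python
import numpy as np

def fi_series(d, n, rng):
    """Simulate y_t = (1-L)^(-d) e_t via the truncated MA(inf) weights
    psi_0 = 1, psi_k = psi_(k-1) * (k - 1 + d) / k."""
    e = rng.standard_normal(n)
    psi = np.ones(n)
    for k in range(1, n):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    return np.array([psi[:t + 1][::-1] @ e[:t + 1] for t in range(n)])

rng = np.random.default_rng(1)
n = 500
y_fi = fi_series(0.7, n, rng)                    # non-stationary FI with d in (0.5, 1)
y_i1 = np.cumsum(rng.standard_normal(n))         # I(1) random-walk benchmark

# Lag-1 autocorrelation of first differences:
# FI(0.7) differenced is FI(-0.3), so it is negative; I(1) differenced is ~0.
print(np.corrcoef(np.diff(y_fi)[:-1], np.diff(y_fi)[1:])[0, 1])
print(np.corrcoef(np.diff(y_i1)[:-1], np.diff(y_i1)[1:])[0, 1])
```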
Abstract:
This paper analyses the integration process that firms follow to implement Supply Chain Management (SCM). The study was inspired by the integration model proposed by Stevens (1989), who suggests that companies first integrate internally and then extend integration to other supply chain members, such as customers and suppliers. To analyse the integration process, a survey was conducted among Spanish food manufacturers. The results show that there are companies in three different integration stages. In stage I, companies are not integrated. In stage II, companies have a medium-high level of internal integration in the Logistics-Production interface, a low level of internal integration in the Logistics-Marketing interface, and a medium level of external integration. In stage III, companies have high levels of integration in both internal interfaces and in some of their supply chain relationships.
Abstract:
Given the adverse impact of image noise on the perception of important clinical details in digital mammography, routine quality control measurements should include an evaluation of noise. The European Guidelines, for example, employ a second-order polynomial fit of pixel variance as a function of detector air kerma (DAK) to decompose noise into quantum, electronic and fixed pattern (FP) components and assess the DAK range where quantum noise dominates. This work examines the robustness of the polynomial method against an explicit noise decomposition method. The two methods were applied to variance and noise power spectrum (NPS) data from six digital mammography units. Twenty homogeneously exposed images were acquired with PMMA blocks for target DAKs ranging from 6.25 to 1600 µGy. Both methods were explored for the effects of data weighting and squared fit coefficients during the curve fitting, the influence of the additional filter material (2 mm Al versus 40 mm PMMA) and noise de-trending. Finally, spatial stationarity of noise was assessed. Data weighting improved noise model fitting over large DAK ranges, especially at low detector exposures. The polynomial and explicit decompositions generally agreed for quantum and electronic noise but FP noise fraction was consistently underestimated by the polynomial method. Noise decomposition as a function of position in the image showed limited noise stationarity, especially for FP noise; thus the position of the region of interest (ROI) used for noise decomposition may influence fractional noise composition. The ROI area and position used in the Guidelines offer an acceptable estimation of noise components. While there are limitations to the polynomial model, when used with care and with appropriate data weighting, the method offers a simple and robust means of examining the detector noise components as a function of detector exposure.
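The second-order polynomial model referred to above can be written as var(DAK) = f·DAK² + q·DAK + e, with the quadratic, linear and constant coefficients read as the fixed-pattern, quantum and electronic noise contributions, respectively. The sketch below illustrates such a fit with numpy; the DAK levels, variance values and 1/variance weighting are illustrative assumptions, not data from the study.

```python
import numpy as np

dak = np.array([6.25, 12.5, 25, 50, 100, 200, 400, 800, 1600])  # uGy, assumed target DAKs
var = np.array([1.1e2, 2.0e2, 3.9e2, 7.6e2, 1.55e3,
                3.2e3, 7.1e3, 1.7e4, 4.6e4])                    # assumed pixel variances

w = 1.0 / var                        # weight low-exposure points more heavily (assumed scheme)
f, q, e = np.polyfit(dak, var, deg=2, w=w)   # coefficients: fixed-pattern, quantum, electronic

total = f * dak**2 + q * dak + e
frac_fixed = f * dak**2 / total      # fixed-pattern noise fraction per DAK level
frac_quantum = q * dak / total       # quantum noise fraction
frac_electronic = e / total          # electronic noise fraction
print(frac_quantum, frac_fixed, frac_electronic)
```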
Abstract:
In conducting genome-wide association studies (GWAS), analytical approaches leveraging biological information may further understanding of the pathophysiology of clinical traits. To discover novel associations with estimated glomerular filtration rate (eGFR), a measure of kidney function, we developed a strategy for integrating prior biological knowledge into the existing GWAS data for eGFR from the CKDGen Consortium. Our strategy focuses on single nucleotide polymorphisms (SNPs) in genes that are connected by functional evidence, determined by literature mining and gene ontology (GO) hierarchies, to genes near previously validated eGFR associations. It then requires association thresholds consistent with multiple testing, and finally evaluates novel candidates by independent replication. Among the samples of European ancestry, we identified a genome-wide significant SNP in FBXL20 (P = 5.6 × 10⁻⁹) in meta-analysis of all available data, and additional SNPs at the INHBC, LRP2, PLEKHA1, SLC3A2 and SLC7A6 genes meeting multiple-testing corrected significance for replication and overall P-values of 4.5 × 10⁻⁴ to 2.2 × 10⁻⁷. Neither the novel PLEKHA1 nor FBXL20 associations, both further supported by association with eGFR among African Americans and with transcript abundance, would have been implicated by eGFR candidate gene approaches. LRP2, encoding the megalin receptor, was identified through connection with the previously known eGFR gene DAB2 and extends understanding of the megalin system in kidney function. These findings highlight integration of existing genome-wide association data with independent biological knowledge to uncover novel candidate eGFR associations, including candidates lacking known connections to kidney-specific pathways. The strategy may also be applicable to other clinical phenotypes, although more testing will be needed to assess its potential for discovery in general.
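A minimal sketch of the prioritisation logic (not the CKDGen analysis code) is given below: GWAS results are restricted to SNPs in genes functionally connected to known eGFR loci, a multiple-testing threshold is applied over that reduced set, and surviving candidates are flagged for independent replication. The gene links, SNP identifiers and p-values are placeholder assumptions.

```python
# Known eGFR genes and assumed literature/GO connections (illustrative only)
known_egfr_genes = {"DAB2", "UMOD"}
functional_links = {"LRP2": {"DAB2"}, "FBXL20": {"UMOD"}, "GENE_X": set()}

gwas = [  # (snp, gene, p-value) -- placeholder values, not study results
    ("rs0000001", "LRP2", 3.0e-7),
    ("rs0000002", "FBXL20", 5.6e-9),
    ("rs0000003", "GENE_X", 1.0e-6),
]

# Keep only SNPs whose gene is functionally connected to a known eGFR gene
candidates = [rec for rec in gwas if functional_links.get(rec[1], set()) & known_egfr_genes]

# Bonferroni threshold over the reduced candidate set, then flag for replication
threshold = 0.05 / len(candidates)
for snp, gene, p in candidates:
    if p < threshold:
        print(f"{snp} ({gene}): carry forward to independent replication (p={p:.1e})")
```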
Abstract:
Although functional neuroimaging studies have supported the distinction between explicit and implicit forms of memory, few have matched explicit and implicit tests closely, and most of these tested perceptual rather than conceptual implicit memory. We compared event-related fMRI responses during an intentional test, in which a group of participants used a cue word to recall its associate from a prior study phase, with those in an incidental test, in which a different group of participants used the same cue to produce the first associate that came to mind. Both semantic relative to phonemic processing at study, and emotional relative to neutral word pairs, increased target completions in the intentional test, but not in the incidental test, suggesting that behavioral performance in the incidental test was not contaminated by voluntary explicit retrieval. We isolated the neural correlates of successful retrieval by contrasting fMRI responses to studied versus unstudied cues for which the equivalent "target" associate was produced. By comparing the difference in this repetition-related contrast across the intentional and incidental tests, we could identify the correlates of voluntary explicit retrieval. This contrast revealed increased bilateral hippocampal responses in the intentional test, but decreased hippocampal responses in the incidental test. A similar pattern in the bilateral amygdala was further modulated by the emotionality of the word pairs, although surprisingly only in the incidental test. Parietal regions, however, showed increased repetition-related responses in both tests. These results suggest that the neural correlates of successful voluntary explicit memory differ in directionality, even if not in location, from the neural correlates of successful involuntary implicit (or explicit) memory, even when the incidental test taps conceptual processes.