24 results for [JEL:C79] Mathematical and Quantitative Methods - Game Theory and Bargaining Theory - Other
Abstract:
Aims: Previous data suggest heterogeneity in the laminar distribution of pathology in the molecular disorder frontotemporal lobar degeneration (FTLD) with transactive response (TAR) DNA-binding protein of 43 kDa (TDP-43) proteinopathy (FTLD-TDP). To study this heterogeneity, we quantified the changes in density across the cortical laminae of neuronal cytoplasmic inclusions, glial inclusions, neuronal intranuclear inclusions, dystrophic neurites, surviving neurones, abnormally enlarged neurones, and vacuoles in regions of the frontal and temporal lobe. Methods: Changes in density of histological features across cortical gyri were studied in 10 sporadic cases of FTLD-TDP using quantitative methods and polynomial curve fitting. Results: Our data suggest that laminar neuropathology in sporadic FTLD-TDP is highly variable. Most commonly, neuronal cytoplasmic inclusions, dystrophic neurites and vacuolation were abundant in the upper laminae, and glial inclusions, neuronal intranuclear inclusions, abnormally enlarged neurones, and glial cell nuclei in the lower laminae. TDP-43-immunoreactive inclusions affected more of the cortical profile in longer-duration cases; their distribution varied with disease subtype, but was unrelated to Braak tangle score. Different TDP-43-immunoreactive inclusions were not spatially correlated. Conclusions: The laminar distribution of pathological features in 10 sporadic cases of FTLD-TDP is heterogeneous and may be accounted for, in part, by disease subtype and disease duration. In addition, the feedforward and feedback cortico-cortical connections may be compromised in FTLD-TDP. © 2012 The Authors. Neuropathology and Applied Neurobiology © 2012 British Neuropathological Society.
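As a rough illustration of the curve-fitting step mentioned above (a minimal sketch, not the authors' actual pipeline; the depth values and counts below are invented for illustration), a low-order polynomial can be fitted to the density of one histological feature sampled at increasing cortical depth, and the best-fitting order chosen by adjusted R-squared:

    import numpy as np

    # Hypothetical densities of one feature (e.g. neuronal cytoplasmic inclusions)
    # counted in successive sample fields from the pia to the white matter.
    depth_fraction = np.linspace(0.05, 0.95, 10)             # relative cortical depth
    density = np.array([12, 15, 14, 10, 7, 5, 4, 3, 3, 2])   # counts per field (made up)

    # Fit first- to third-order polynomials and keep the order with the best
    # adjusted R^2, mirroring the idea of polynomial curve fitting across laminae.
    best_order, best_adj_r2 = None, -np.inf
    for order in (1, 2, 3):
        coeffs = np.polyfit(depth_fraction, density, order)
        fitted = np.polyval(coeffs, depth_fraction)
        ss_res = np.sum((density - fitted) ** 2)
        ss_tot = np.sum((density - density.mean()) ** 2)
        r2 = 1 - ss_res / ss_tot
        adj_r2 = 1 - (1 - r2) * (len(density) - 1) / (len(density) - order - 1)
        if adj_r2 > best_adj_r2:
            best_order, best_adj_r2 = order, adj_r2

    print(f"best polynomial order: {best_order}, adjusted R^2: {best_adj_r2:.2f}")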
Abstract:
Although crisp data are fundamentally indispensable for determining the profit Malmquist productivity index (MPI), the observed values in real-world problems are often imprecise or vague. These imprecise or vague data can be suitably characterized with fuzzy and interval methods. In this paper, we reformulate the conventional profit MPI problem as an imprecise data envelopment analysis (DEA) problem, and propose two novel methods for measuring the overall profit MPI when the inputs, outputs, and price vectors are fuzzy or vary in intervals. We develop a fuzzy version of the conventional MPI model by using a ranking method, and solve the model with a commercial off-the-shelf DEA software package. In addition, we define an interval for the overall profit MPI of each decision-making unit (DMU) and divide the DMUs into six groups according to the intervals obtained for their overall profit efficiency and MPIs. We also present two numerical examples to demonstrate the applicability of the two proposed models and exhibit the efficacy of the procedures and algorithms. © 2011 Elsevier Ltd.
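For orientation, the conventional Malmquist productivity index that the profit-based index generalises is usually written in its distance-function form (this is the textbook expression, not the paper's fuzzy or interval formulation):

\[
\mathrm{MPI}^{t,t+1} \;=\; \left[ \frac{D^{t}\!\left(x^{t+1},\, y^{t+1}\right)}{D^{t}\!\left(x^{t},\, y^{t}\right)} \cdot \frac{D^{t+1}\!\left(x^{t+1},\, y^{t+1}\right)}{D^{t+1}\!\left(x^{t},\, y^{t}\right)} \right]^{1/2}
\]

where \(D^{t}(\cdot)\) is the distance function evaluated against the period-\(t\) frontier. In the setting of the abstract, the corresponding profit-efficiency terms are only known to lie in intervals, so each DMU's index becomes an interval \([\mathrm{MPI}^{L}, \mathrm{MPI}^{U}]\) rather than a single number, which is what allows the DMUs to be sorted into groups.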
Abstract:
In the global economy, innovation is one of the most important competitive assets for companies willing to compete in international markets. As competition moves from standardised products to customised ones that depend on each specific market's needs, economies of scale are no longer the only winning strategy. Innovation requires firms to establish processes to acquire and absorb new knowledge, leading to the recent theory of Open Innovation. Knowledge sharing and acquisition happen when firms are embedded in networks with other firms, universities, institutions and many other economic actors. Several typologies of innovation and firm networks have been identified, with various geographical spans. One of the first to be modelled was the Industrial Cluster (in Italian, Distretto Industriale), which was long considered the benchmark for innovation and economic development. Other kinds of networks have been modelled since the late 1970s; Regional Innovation Systems represent one of the latest and most widespread models of innovation networks, specifically introduced to combine local networks and the global economy. This model has been explored mainly qualitatively since its introduction, but, together with National Innovation Systems, it is among the most inspiring for policy makers and is often cited by them, not always properly. The aim of this research is to set up an econometric model describing Regional Innovation Systems, making it one of the first attempts to test and enhance this theory with a quantitative approach. A dataset of secondary and primary data covering 104 European regions was built in order to run a multiple linear regression, testing whether Regional Innovation Systems are really correlated with regional innovation and with regional innovation in cooperation with foreign partners. Furthermore, an exploratory multiple linear regression was performed to verify which variables, among those describing a Regional Innovation System, are the most significant for innovating, alone or with foreign partners. The effectiveness of present innovation policies was then tested against the findings of the econometric model. The developed model confirmed the role of Regional Innovation Systems in creating innovation, including in cooperation with international partners: this represents one of the first quantitative confirmations of a theory previously based on qualitative models only. The results also indicated a minor influence of National Innovation Systems: comparing existing innovation policies, at both the regional and national level, with our findings revealed the need for a potentially pivotal change in the direction currently followed by policy makers. Last, while confirming the role of a learning environment in a region and the catalyst role of regional administration, this research offers a potential new perspective for the whole private sector in creating a Regional Innovation System.
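A minimal sketch of how such a multiple linear regression could be set up (the file name, variable names and regressors below are hypothetical placeholders, not the thesis's actual specification):

    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical dataset: one row per European region with an innovation-output
    # measure and candidate Regional Innovation System descriptors.
    df = pd.read_csv("regions.csv")

    y = df["innovation_with_foreign_partners"]            # hypothetical dependent variable
    X = df[["rd_expenditure", "tertiary_education",       # hypothetical RIS variables
            "university_industry_links", "regional_policy_support"]]
    X = sm.add_constant(X)                                # intercept term

    model = sm.OLS(y, X, missing="drop").fit()
    print(model.summary())                                # coefficients, t-statistics, R-squared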
Abstract:
The typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. The relationship between the free energy in the statistical-mechanics approach and the mutual information used in the information-theory literature is established within a general framework; Gallager and MacKay-Neal codes are studied as specific examples of LDPC codes. It is shown that basic properties of these codes known for particular channels, including their potential to saturate Shannon's bound, hold for general symmetric channels. The binary-input additive white Gaussian noise (AWGN) channel and the binary-input Laplace channel are considered as specific channel models.
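As a small, self-contained illustration of one of the channel models mentioned, rather than of the statistical-mechanics machinery itself, the Shannon bound of the binary-input AWGN channel can be estimated by Monte Carlo integration (the Es/N0 convention and sample size below are assumptions):

    import numpy as np

    def biawgn_capacity(es_n0_db, n=200_000, seed=0):
        """Monte Carlo estimate of binary-input AWGN channel capacity in bits per use."""
        rng = np.random.default_rng(seed)
        sigma = np.sqrt(1.0 / (2.0 * 10.0 ** (es_n0_db / 10.0)))  # unit-energy BPSK, Es/N0 in dB
        x = rng.choice([-1.0, 1.0], size=n)          # equiprobable binary input
        y = x + sigma * rng.normal(size=n)           # AWGN output samples
        # I(X;Y) = h(Y) - h(Y|X); h(Y) is estimated from the known two-Gaussian mixture density.
        p_y = 0.5 / np.sqrt(2 * np.pi * sigma**2) * (
            np.exp(-(y - 1) ** 2 / (2 * sigma**2)) + np.exp(-(y + 1) ** 2 / (2 * sigma**2))
        )
        h_y = -np.mean(np.log2(p_y))                              # differential entropy of Y
        h_y_given_x = 0.5 * np.log2(2 * np.pi * np.e * sigma**2)  # Gaussian noise entropy
        return h_y - h_y_given_x

    print(biawgn_capacity(0.0))   # approximately 0.72 bits per channel use at 0 dB Es/N0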
Abstract:
The postgenomic era, as manifested, inter alia, by proteomics, offers unparalleled opportunities for the efficient discovery of safe, efficacious, and novel subunit vaccines targeting a tranche of modern major diseases. A negative corollary of this opportunity is the risk of becoming overwhelmed by this embarrassment of riches. Informatics techniques, which address issues of data management and, through prediction, shortcut the experimental process, can be of enormous benefit in leveraging the proteomic revolution. In this disquisition, we evaluate proteomic approaches to the discovery of subunit vaccines, focussing on viral, bacterial, fungal, and parasite systems. We also adumbrate the impact that proteomic analysis of host-pathogen interactions can have. Finally, we review methods relevant to the prediction of the immunome, with special emphasis on quantitative methods, and to the subcellular localization of proteins within bacteria.
Abstract:
This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used for producing OCT-phantoms. Transparent materials are generally inert to infra-red radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. In order to select the best inscription parameters, combinations of different inscription parameters were tested, using three fs laser systems with different operating properties, on a variety of materials. This facilitated the understanding of the key characteristics of the produced structures with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica. The use of these phantoms to characterise many properties (resolution, distortion, sensitivity decay, scan linearity) of an OCT system was demonstrated. Quantitative methods were developed to support the characterisation of an OCT system collecting images from phantoms and also to improve the quality of the OCT images. Characterisation methods include the measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is a computationally intensive process. Standard central processing unit (CPU) based processing might take several minutes to a few hours to process acquired data, so data processing is a significant bottleneck. An alternative is to use expensive hardware-based processing such as field programmable gate arrays (FPGAs). However, graphics processing unit (GPU) based data processing methods have recently been developed to minimise this data processing and rendering time. These processing techniques include standard-processing methods, which comprise a set of algorithms to process the raw interference data obtained by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented into a custom-built Fourier domain optical coherence tomography (FD-OCT) system. This system currently processes and renders data in real time. Processing throughput of this system is currently limited by the camera capture rate. OCT-phantoms have been heavily used for the qualitative characterisation and adjustment/fine-tuning of the operating conditions of the OCT system. Currently, investigations are under way to characterise OCT systems using our phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and accelerating OCT data processing using GPUs. In the process of developing phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding led to several novel pieces of research that are not only relevant to OCT but have broader importance. For example, extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications such as the making of phase masks, waveguides and microfluidic channels. Acceleration of data processing with GPUs is also useful in other fields.
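As a rough sketch of the standard-processing chain mentioned above, a simplified CPU version (not the GPU implementation developed in the thesis; spectral resampling to linear wavenumber is omitted and the array names are placeholders) could look like this:

    import numpy as np

    def spectra_to_ascans(spectra, background):
        """Convert raw FD-OCT spectral interferograms (one row per A-line) into A-scans."""
        fringes = spectra - background                  # remove the reference/DC background
        window = np.hanning(fringes.shape[-1])          # apodisation to suppress side lobes
        depth = np.fft.fft(fringes * window, axis=-1)   # Fourier transform along wavenumber
        half = depth[..., : fringes.shape[-1] // 2]     # keep the positive-depth half
        return 20 * np.log10(np.abs(half) + 1e-12)      # log-magnitude A-scans in dB

    # A real pipeline would also resample the spectra to be linear in wavenumber (k)
    # and correct dispersion before the FFT; a GPU version runs the same steps in parallel.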
Abstract:
This study draws upon effectuation and causation as examples of planning-based and flexible decision-making logics, and investigates dynamics in the use of both logics. The study applies a longitudinal process research approach to investigate strategic decision-making in new venture creation over time. Combining qualitative and quantitative methods, we analyze 385 decision events across nine technology-based ventures. Our observations suggest a hybrid perspective on strategic decision-making, demonstrating how effectuation and causation logics are combined, and how entrepreneurs’ emphasis on these logics shifts and re-shifts over time. We induce a dynamic model which extends the literature on strategic decision-making in venture creation.
Abstract:
Ageing is a natural phenomenon of the human lifecycle, yet it is still not understood what causes the deterioration of the human body near the end of the lifespan. One popular theory is the Free Radical Theory of Ageing, which proposes that oxidative damage to biomolecules causes ageing of tissues. The ageing population is affected by many chronic diseases. This study focused on sarcopenia (muscle loss in ageing) and obesity as two models for comparison of oxidative damage in muscle proteins in mice. The aim of the study was to develop advanced mass spectrometry methods to detect specific oxidative modifications to mouse muscle proteins, including oxidation, nitration, chlorination, and carbonyl group formation; western blotting was also used to provide complementary information on the oxidative state of proteins from aged and obese muscle. Mass spectrometry proved to be a powerful tool, enabling identification of the types of modifications present, the sites at which they occurred, and the percentage of the peptide populations that were modified. Targeted and semi-targeted mass spectrometry methods were optimised for the identification and quantitation of the oxidised residues in muscle proteins. The development of the quantitative methods enabled comparisons of mass spectrometry instruments. Both the Time-of-Flight and QTRAP systems showed advantages when using their different mass analysers to quantify oxidative modifications. Several oxidised residues were characterised and quantified in both the obese and sarcopenic models, and higher levels of oxidation were found compared with their control counterparts. The modifications identified included oxidation of proline, tyrosine and tryptophan, dioxidation of methionine, formation of allysine, and nitration of tyrosine. However, quantification was performed on methionine dioxidation- and cysteine trioxidation-containing residues in SERCA. The combination of measuring residue susceptibility and functional studies could contribute to understanding the overall role of oxidation in ageing and obesity.
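For context, one common convention for reporting the percentage of a peptide population that carries a given modification from extracted-ion peak areas (a standard label-free relative-quantitation formula, not necessarily the exact scheme used in this study) is

\[
\%\,\mathrm{modified} \;=\; \frac{A_{\mathrm{modified}}}{A_{\mathrm{modified}} + A_{\mathrm{unmodified}}} \times 100,
\]

where \(A\) is the integrated peak area of the modified or unmodified form of the same peptide; for example, areas of \(2\times10^{5}\) and \(8\times10^{5}\) correspond to 20% modification.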
Abstract:
By mixing concepts from both game-theoretic analysis and real options theory, an investment decision in a competitive market can be seen as a "game" between firms, as firms implicitly take into account other firms' reactions to their own investment actions. We review two decades of real option game models, suggesting which critical problems have been "solved" by considering game theory, and which significant problems have not yet been adequately addressed. We provide some insights on plausible empirical applications, or shortfalls in applications to date, and suggest some promising avenues for future research.