918 results for Kaski, Antti: The security complex: a theoretical analysis and the Baltic case
Abstract:
We present a group theoretical analysis of several classes of organic superconductors. We predict that highly frustrated organic superconductors, such as κ-(ET)₂Cu₂(CN)₃ (where ET is BEDT-TTF, bis(ethylenedithio)tetrathiafulvalene) and β′-[Pd(dmit)₂]₂X, undergo two superconducting phase transitions: the first from the normal state to a d-wave superconductor and the second to a d + id state. We show that the monoclinic distortion of κ-(ET)₂Cu(NCS)₂ means that the symmetry of its superconducting order parameter is different from that of orthorhombic κ-(ET)₂Cu[N(CN)₂]Br. We propose that β″- and θ-phase organic superconductors have d_xy + s order parameters.
Abstract:
Over the past decade, several experienced Operational Researchers have advanced the view that the theoretical aspects of model building have raced ahead of the ability of people to use them. Consequently, the impact of Operational Research on commercial organisations and the public sector is limited, and many systems fail to achieve their anticipated benefits in full. The primary objective of this study is to examine a complex interactive Stock Control system, and identify the reasons for the differences between the theoretical expectations and the operational performance. The methodology used is to hypothesise all the possible factors which could cause a divergence between theory and practice, and to evaluate numerically the effect each of these factors has on two main control indices - Service Level and Average Stock Value. Both analytical and empirical methods are used, and simulation is employed extensively. The factors are divided into two main categories for analysis - theoretical imperfections in the model, and the usage of the system by Buyers. No evidence could be found in the literature of any previous attempts to place the differences between theory and practice in a system in quantitative perspective nor, more specifically, to study the effects of Buyer/computer interaction in a Stock Control system. The study reveals that, in general, the human factors influencing performance are of a much higher order of magnitude than the theoretical factors, thus providing objective evidence to support the original premise. The most important finding is that, by judicious intervention into an automatic stock control algorithm, it is possible for Buyers to produce results which not only attain but surpass the algorithmic predictions. However, the complexity and behavioural recalcitrance of these systems are such that an innately numerate, enquiring type of Buyer needs to be inducted to realise the performance potential of the overall man/computer system.
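The two control indices named above, Service Level and Average Stock Value, can be reproduced in a minimal Monte Carlo sketch of a reorder-point stock control policy. The policy parameters and uniform demand model below are illustrative assumptions, not the system studied in the thesis:

```python
import random

def simulate_stock(reorder_point, order_qty, mean_demand,
                   periods=10_000, lead_time=2, seed=42):
    """Simulate a simple reorder-point policy and report the two control
    indices discussed above: Service Level and Average Stock Value."""
    rng = random.Random(seed)
    stock = reorder_point + order_qty
    on_order = []                      # outstanding orders: (arrival_period, qty)
    served = demanded = stock_sum = 0
    for t in range(periods):
        # receive any orders due this period
        stock += sum(q for due, q in on_order if due == t)
        on_order = [(due, q) for due, q in on_order if due != t]
        demand = rng.randint(0, 2 * mean_demand)   # uniform demand, mean = mean_demand
        demanded += demand
        served += min(demand, stock)
        stock = max(stock - demand, 0)
        # reorder when the inventory position falls to the reorder point
        position = stock + sum(q for _, q in on_order)
        if position <= reorder_point:
            on_order.append((t + lead_time, order_qty))
        stock_sum += stock
    return served / demanded, stock_sum / periods

service_level, avg_stock = simulate_stock(reorder_point=20, order_qty=50, mean_demand=10)
```

A Buyer's "judicious intervention" would correspond to overriding `reorder_point` or `order_qty` between periods, which is exactly the man/computer interaction the study quantifies.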
Abstract:
This thesis describes the development of a simple and accurate method for estimating the quantity and composition of household waste arisings. The method is based on the fundamental tenet that waste arisings can be predicted from information on the demographic and socio-economic characteristics of households, thus reducing the need for the direct measurement of waste arisings to that necessary for the calibration of a prediction model. The aim of the research is twofold: firstly to investigate the generation of waste arisings at the household level, and secondly to devise a method for supplying information on waste arisings to meet the needs of waste collection and disposal authorities, policy makers at both national and European level and the manufacturers of plant and equipment for waste sorting and treatment. The research was carried out in three phases: theoretical, empirical and analytical. In the theoretical phase specific testable hypotheses were formulated concerning the process of waste generation at the household level. The empirical phase of the research involved an initial questionnaire survey of 1277 households to obtain data on their socio-economic characteristics, and the subsequent sorting of waste arisings from each of the households surveyed. The analytical phase was divided between (a) the testing of the research hypotheses by matching each household's waste against its demographic/socioeconomic characteristics (b) the development of statistical models capable of predicting the waste arisings from an individual household and (c) the development of a practical method for obtaining area-based estimates of waste arisings using readily available data from the national census. The latter method was found to represent a substantial improvement over conventional methods of waste estimation in terms of both accuracy and spatial flexibility. 
The research therefore represents a substantial contribution both to scientific knowledge of the process of household waste generation, and to the practical management of waste arisings.
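The calibration step described above, predicting waste arisings from demographic and socio-economic characteristics, amounts to fitting a regression model. A sketch on synthetic data (the predictors and coefficients are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
household_size = rng.integers(1, 7, n)     # persons per household
income_band = rng.integers(1, 6, n)        # coarse socio-economic band
X = np.column_stack([np.ones(n), household_size, income_band])
true_coef = np.array([2.0, 3.5, 0.8])      # hidden: intercept, per-person, per-band (kg/week)
waste = X @ true_coef + rng.normal(0.0, 1.0, n)   # measured arisings with noise

# calibrate the prediction model by ordinary least squares
coef, *_ = np.linalg.lstsq(X, waste, rcond=None)
predicted = X @ coef   # area-based estimates would aggregate such predictions over census data
```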
Abstract:
Over the past forty years the corporate identity literature has developed to a point of maturity where it currently contains many definitions and models of the corporate identity construct at the organisational level. The literature has evolved by developing models of corporate identity or by considering corporate identity in relation to new and developing themes, e.g. corporate social responsibility. It has evolved into a multidisciplinary domain, recently incorporating constructs from other literatures to further its development. However, the literature has a number of limitations. An overarching and universally accepted definition of corporate identity remains elusive, potentially leaving the construct without clear definition. Only a few corporate identity definitions and models, at the corporate level, have been empirically tested. The corporate identity construct is overwhelmingly defined and theoretically constructed at the corporate level, leaving the literature without a detailed understanding of its influence at the individual stakeholder level. Front-line service employees (FLEs) form a component in a number of corporate identity models developed at the organisational level. FLEs deliver the services of an organisation to its customers, as well as represent the organisation by communicating and transporting its core defining characteristics to customers through continual customer contact and interaction. This person-to-person contact between an FLE and the customer is termed a service encounter, and service encounters influence a customer's perception of both the service delivered and the associated level of service quality. Therefore, this study for the first time defines, theoretically models and empirically tests corporate identity at the individual FLE level, termed FLE corporate identity.
The study uses the services marketing literature to characterise an FLE's operating environment, arriving at five potential dimensions of the FLE corporate identity construct. These are scrutinised against existing corporate identity definitions and models to arrive at a definition for the construct. In reviewing the corporate identity, services marketing, branding and organisational psychology literature, a theoretical model is developed for FLE corporate identity, which is empirically and quantitatively tested with FLEs in seven stores of a major national retailer. Following rigorous construct reliability and validity testing, the 601 usable responses are used to estimate a confirmatory factor analysis and structural equation model for the study. The results for the individual hypotheses and the structural model are very encouraging, as they fit the data well and support a definition of FLE corporate identity. This study makes contributions to the branding, services marketing and organisational psychology literature, but its principal contribution is to extend the corporate identity literature into a new area of discourse and research: that of FLE corporate identity.
Abstract:
In this paper, using a modified deprivation (or poverty) function, we theoretically study the changes in poverty with respect to the 'global' mean and variance of the income distribution, using Indian survey data. We show that when income obeys a log-normal distribution, a rising mean income generally indicates a reduction in poverty, while an increase in the variance of the income distribution increases poverty. This optimistic view for a developing economy, however, is no longer tenable once the poverty index is found to follow a Pareto distribution. Here, although a rising mean income indicates a reduction in poverty, due to the presence of an inflexion point in the poverty function there is a critical value of the variance below which poverty decreases with increasing variance, while beyond this value poverty undergoes a steep increase followed by a decrease with respect to higher variance. Identifying this inflexion point as the poverty line, we show that the Pareto poverty function satisfies all three standard axioms of a poverty index [N.C. Kakwani, Econometrica 43 (1980) 437; A.K. Sen, Econometrica 44 (1976) 219], whereas the log-normal distribution falls short of this requisite. Following these results, we make quantitative predictions to correlate a developing with a developed economy. © 2006 Elsevier B.V. All rights reserved.
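For the log-normal case discussed above, the headcount poverty ratio has the closed form P = Φ((ln z − μ)/σ) for poverty line z. A short numerical sketch (the parameter values are illustrative) confirms the stated behaviour: poverty falls as the mean rises and, when the line lies below the median income, rises with the variance:

```python
from math import erf, log, sqrt

def headcount_poverty(z, mu, sigma):
    """Share of the population with income below the poverty line z when
    log-income is Normal(mu, sigma^2): P = Phi((ln z - mu) / sigma)."""
    x = (log(z) - mu) / sigma
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

z = 1.0   # poverty line (arbitrary units)
# raising the mean of log-income (mu) lowers poverty
low_mean  = headcount_poverty(z, mu=0.5, sigma=1.0)
high_mean = headcount_poverty(z, mu=1.0, sigma=1.0)
# raising the variance (sigma) increases poverty when z sits below the median
low_var  = headcount_poverty(z, mu=0.5, sigma=0.5)
high_var = headcount_poverty(z, mu=0.5, sigma=1.5)
```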
Abstract:
The accurate in silico identification of T-cell epitopes is a critical step in the development of peptide-based vaccines, reagents, and diagnostics. It has a direct impact on the success of subsequent experimental work. Epitopes arise as a consequence of complex proteolytic processing within the cell. Prior to being recognized by T cells, an epitope is presented on the cell surface as a complex with a major histocompatibility complex (MHC) protein. A prerequisite therefore for T-cell recognition is that an epitope is also a good MHC binder. Thus, T-cell epitope prediction overlaps strongly with the prediction of MHC binding. In the present study, we compare discriminant analysis and multiple linear regression as algorithmic engines for the definition of quantitative matrices for binding affinity prediction. We apply these methods to peptides which bind the well-studied human MHC allele HLA-A*0201. A matrix which results from combining results of the two methods proved powerfully predictive under cross-validation. The new matrix was also tested on an external set of 160 binders to HLA-A*0201; it was able to recognize 135 (84%) of them.
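A minimal sketch of the quantitative-matrix idea described above, under the usual additive assumption: each residue at each peptide position contributes independently to binding affinity, and the contributions are recovered by multiple linear regression over one-hot-encoded peptides. The data here are synthetic, not the HLA-A*0201 set used in the study:

```python
import numpy as np

AMINO = "ACDEFGHIKLMNPQRSTVWY"
PEPLEN = 9   # HLA-A*0201 binds predominantly 9-mers

def one_hot(peptide):
    """Encode a 9-mer as a flat 9 x 20 indicator vector."""
    v = np.zeros(PEPLEN * len(AMINO))
    for pos, aa in enumerate(peptide):
        v[pos * len(AMINO) + AMINO.index(aa)] = 1.0
    return v

# Synthetic training set: affinities generated from a hidden additive matrix,
# then re-estimated by multiple linear regression (least squares).
rng = np.random.default_rng(1)
true_matrix = rng.normal(0, 1, PEPLEN * len(AMINO))
peptides = ["".join(rng.choice(list(AMINO), PEPLEN)) for _ in range(500)]
X = np.array([one_hot(p) for p in peptides])
y = X @ true_matrix + rng.normal(0, 0.1, len(peptides))
fitted, *_ = np.linalg.lstsq(X, y, rcond=None)
score = one_hot(peptides[0]) @ fitted   # predicted affinity for one peptide
```

Scoring an external peptide is then a single dot product against the fitted matrix, which is what makes matrix methods fast enough for whole-proteome epitope scans.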
Abstract:
In this paper, a methodology is presented for evaluating the information security of objects under attack that are processed by methods of compression. Two basic parameters for the evaluation of information security, TIME and SIZE, are chosen, and the characteristics that affect their evaluation are analyzed and estimated. A coefficient of information security of an object is proposed, defined as the mean of the coefficients for the parameters TIME and SIZE. From the simulation experiments carried out, the methods with the highest coefficient of information security were determined. Assessments and conclusions for future investigations are proposed.
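The proposed combination can be stated in a few lines; the assumption that both parameter coefficients are normalized to [0, 1] is ours, for illustration:

```python
def security_coefficient(time_coef, size_coef):
    """Coefficient of information security of an object, taken as the mean
    of the TIME and SIZE coefficients (assumed normalized to [0, 1])."""
    for c in (time_coef, size_coef):
        if not 0.0 <= c <= 1.0:
            raise ValueError("parameter coefficients are expected in [0, 1]")
    return (time_coef + size_coef) / 2.0

k = security_coefficient(0.8, 0.6)   # -> 0.7
```

Ranking compression methods by `k` then picks out the methods with the highest coefficient, as in the simulation experiments.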
Abstract:
Given the growing number of wrongful convictions involving faulty eyewitness evidence and the strong reliance by jurors on eyewitness testimony, researchers have sought to develop safeguards to decrease erroneous identifications. While decades of eyewitness research have led to numerous recommendations for the collection of eyewitness evidence, less is known regarding the psychological processes that govern identification responses. The purpose of the current research was to expand the theoretical knowledge of eyewitness identification decisions by exploring two separate memory theories: signal detection theory and dual-process theory. This was accomplished by examining both system and estimator variables in the context of a novel lineup recognition paradigm. Both theories were also examined in conjunction with confidence to determine whether it might add significantly to the understanding of eyewitness memory. In two separate experiments, both an encoding and a retrieval-based manipulation were chosen to examine the application of theory to eyewitness identification decisions. Dual-process estimates were measured through the use of remember-know judgments (Gardiner & Richardson-Klavehn, 2000). In Experiment 1, the effects of divided attention and lineup presentation format (simultaneous vs. sequential) were examined. In Experiment 2, perceptual distance and lineup response deadline were examined. Overall, the results indicated that discrimination and remember judgments (recollection) were generally affected by variations in encoding quality and response criterion and know judgments (familiarity) were generally affected by variations in retrieval options. Specifically, as encoding quality improved, discrimination ability and judgments of recollection increased; and as the retrieval task became more difficult there was a shift toward lenient choosing and more reliance on familiarity.
The application of signal detection theory and dual-process theory in the current experiments produced predictable results on both system and estimator variables. These theories were also compared to measures of general confidence, calibration, and diagnosticity. The application of the additional confidence measures in conjunction with signal detection theory and dual-process theory gave a more in-depth explanation than either theory alone. Therefore, the general conclusion is that eyewitness identifications can be understood in a more complete manner by applying theory and examining confidence. Future directions and policy implications are discussed.
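The signal detection quantities underlying the analysis above, discrimination and response criterion, can be computed directly from hit and false-alarm rates; the rates used here are illustrative:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Discrimination index d' = z(H) - z(F)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

def criterion(hit_rate, false_alarm_rate):
    """Response criterion c = -(z(H) + z(F)) / 2; positive values indicate
    conservative responding, negative values lenient choosing."""
    z = NormalDist().inv_cdf
    return -(z(hit_rate) + z(false_alarm_rate)) / 2.0

# e.g. 80% culprit identifications vs 20% false identifications from
# culprit-absent lineups (illustrative rates)
dp = d_prime(0.8, 0.2)
c = criterion(0.8, 0.2)
```

In this framing, the encoding manipulations above shift d′ while the retrieval manipulations shift c, which is exactly the pattern the experiments report.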
Abstract:
This dissertation introduces a new approach for assessing the effects of pediatric epilepsy on the language connectome. Two novel data-driven network construction approaches are presented. These methods rely on connecting different brain regions using either extent or intensity of language-related activations as identified by independent component analysis of fMRI data. An auditory description decision task (ADDT) paradigm was used to activate the language network for 29 patients and 30 controls recruited from three major pediatric hospitals. Empirical evaluations illustrated that pediatric epilepsy can cause, or is associated with, a network efficiency reduction. Patients showed a propensity to inefficiently employ the whole brain network to perform the ADDT language task; by contrast, controls seemed to efficiently use smaller segregated network components to achieve the same task. To explain the causes of the decreased efficiency, graph theoretical analysis was carried out. The analysis revealed no substantial global network feature differences between the patient and control groups. It also showed that for both subject groups the language network exhibited small-world characteristics; however, the patients' extent-of-activation network showed a tendency towards more random networks. It was also shown that the intensity-of-activation network displayed ipsilateral hub reorganization on the local level. The left hemispheric hubs displayed greater centrality values for patients, whereas the right hemispheric hubs displayed greater centrality values for controls. This hub hemispheric disparity was not correlated with the right atypical language laterality found in six patients. Finally, it was shown that a multi-level unsupervised clustering scheme based on self-organizing maps, a type of artificial neural network, and k-means was able to fairly and blindly separate the subjects into their respective patient or control groups.
The clustering was initiated using the local nodal centrality measurements only. Compared to the extent-of-activation network, the intensity-of-activation network clustering demonstrated better precision. This outcome supports the assertion that the local centrality differences presented by the intensity-of-activation network can be associated with focal epilepsy.
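The k-means stage of the clustering scheme described above can be sketched generically; this is a plain k-means on synthetic centrality-like profiles, not the SOM-plus-k-means pipeline of the dissertation:

```python
import random

def kmeans(points, k=2, iters=50, seed=3):
    """Minimal k-means on feature vectors (e.g. per-subject nodal centrality profiles)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center (squared Euclidean distance)
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # recompute each center as the mean of its cluster (keep old center if empty)
        centers = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return clusters

# Two well-separated synthetic groups standing in for the patient/control split
pts = ([[0.2 + 0.01 * i, 0.2] for i in range(5)]
       + [[0.8 + 0.01 * i, 0.8] for i in range(5)])
groups = kmeans(pts, k=2)
```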
Abstract:
Understanding pathways of neurological disorders requires extensive research on both functional and structural characteristics of the brain. This dissertation introduces two interrelated research endeavors, describing (1) a novel integrated approach for constructing functional connectivity networks (FCNs) of the brain using non-invasive scalp EEG recordings; and (2) a decision aid for estimating intracranial volume (ICV). The approach in (1) was developed to study the alterations of networks in patients with pediatric epilepsy. Results demonstrated the existence of statistically significant (p
Abstract:
The phase difference principle is widely applied nowadays to sonar systems used for sea floor bathymetry. The apparent angle of a target point is obtained from the phase difference measured between two close receiving arrays. Here we study the influence of the phase difference estimation errors caused by the physical structure of the backscattered signals. It is shown that, under certain current conditions, beyond the commonly considered effects of additive external noise and baseline decorrelation, the processing may be affected by the shifting footprint effect: this is due to the fact that the two interferometer receivers get simultaneous echo contributions coming from slightly shifted seabed parts, which results in a degradation of the signal coherence and, hence, of the phase difference measurement. This geometrical effect is described analytically and checked with numerical simulations, both for square- and sine-shaped signal envelopes. Its relative influence depends on the geometrical configuration and receiver spacing; it may be prevalent in practical cases associated with bathymetric sonars. The cases of square and smooth signal envelopes are both considered. The measurements close to nadir, which are known to be especially difficult with interferometry systems, are addressed in particular.
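For receiver spacing d and wavelength λ, the phase difference principle mentioned above reduces to Δφ = (2πd/λ)·sin θ. A sketch of inverting this relation (the sonar parameters are illustrative):

```python
from math import asin, degrees, pi

def apparent_angle(delta_phi, spacing, wavelength):
    """Target angle (rad, from broadside) recovered from the phase difference
    measured between the two receivers: delta_phi = (2*pi*spacing/wavelength)*sin(theta)."""
    s = delta_phi * wavelength / (2.0 * pi * spacing)
    if abs(s) > 1.0:
        raise ValueError("phase difference outside the unambiguous range")
    return asin(s)

wl = 1.5e-2            # ~100 kHz sonar wavelength in water (m), illustrative
d = wl / 2.0           # half-wavelength receiver spacing avoids phase ambiguity
theta = apparent_angle(delta_phi=pi / 4, spacing=d, wavelength=wl)
theta_deg = degrees(theta)
# a phase error d_phi propagates to an angle error of roughly
# d_phi * wavelength / (2*pi*spacing*cos(theta)); coherence losses such as the
# shifting footprint effect enter precisely through this d_phi term
```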
Abstract:
New psychoactive substances (NPSs) have appeared on the recreational drug market at an unprecedented rate in recent years. Many are not new drugs but failed products of the pharmaceutical industry. The speed and variety of drugs entering the market pose a new, complex challenge for the forensic toxicology community. The detection of these substances in biological matrices can be difficult, as the exact compounds of interest may not be known. Many NPSs are sold under the same brand name, and therefore users themselves may not know what substances they have ingested. The majority of analytical methods for the detection of NPSs tend to focus on a specific class of compounds rather than a wide variety. In response to this, a robust and sensitive method was developed for the analysis of various NPSs by solid phase extraction (SPE) with gas chromatography-mass spectrometry (GC-MS). Sample preparation and derivatisation were optimised by testing a range of SPE cartridges and derivatising agents, as well as derivatisation incubation time and temperature. The final GC-MS method was validated in accordance with SWGTOX 2013 guidelines over a wide concentration range for both blood and urine, for 23 and 25 analytes respectively. This included the validation of 8 NBOMe compounds in blood and 10 NBOMe compounds in urine. The GC-MS method was then applied to 8 authentic samples, with concentrations compared to those originally identified by NMS laboratories. The rapid influx of NPSs has resulted in the re-analysis of samples, and thus the stability of these substances is crucial information. The stability of mephedrone was investigated, examining the effect that storage temperatures and preservatives had on analyte stability, daily for 1 week and then weekly for 10 weeks. Several laboratories identified NPS use through the cross-reactivity of these substances with existing screening protocols such as ELISA.
The application of Immunalysis ketamine, methamphetamine and amphetamine ELISA kits for the detection of NPSs was evaluated. The aim of this work was to determine whether any cross-reactivity from NPS substances was observed, and whether these existing kits would identify NPS use within biological samples. The cross-reactivity of methoxetamine, 3-MeO-PCE and 3-MeO-PCP with different commercial point-of-care tests (POCT) was also assessed in urine. One of the newest groups of compounds to appear on the NPS market is the NBOMe series. These drugs pose a serious threat to public health due to their high potency, with fatalities already reported in the literature. These compounds are falsely marketed as LSD, which increases the chance of adverse effects due to the potency differences between the two substances. A liquid chromatography tandem mass spectrometry (LC-MS/MS) method was validated in accordance with SWGTOX 2013 guidelines for the detection of 25B-, 25C- and 25I-NBOMe in urine and hair. Long-Evans rats were administered 25B-, 25C- and 25I-NBOMe at doses ranging from 30-300 µg/kg over a period of 10 days. Tail flick tests were then carried out on the rats in order to determine whether any analgesic effects were observed as a result of dosing. Rats were also shaved prior to their first dose and reshaved after the 10-day period. Hair was separated by colour (black and white) and analysed using the validated LC-MS/MS method, assessing the impact hair colour has on the incorporation of these drugs. Urine was collected from the rats, analysed using the validated LC-MS/MS method and screened for potential metabolites using both LC-MS/MS and quadrupole time-of-flight (QToF) instrumentation.
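One validation step implied by the guideline work above, fitting a linear calibration curve and deriving a detection limit, can be sketched generically. The calibration data are invented, and LOD = 3.3·s_y/slope is the standard ICH-style criterion, not necessarily the one used in the thesis:

```python
# Hypothetical calibrators for one analyte (ng/mL) and instrument responses
# (peak-area ratios); both series are invented for illustration.
conc = [5, 10, 25, 50, 100, 250]
resp = [0.9, 2.1, 5.2, 10.1, 19.8, 50.3]

n = len(conc)
mx, my = sum(conc) / n, sum(resp) / n
sxx = sum((x - mx) ** 2 for x in conc)
slope = sum((x - mx) * (y - my) for x, y in zip(conc, resp)) / sxx
intercept = my - slope * mx

# residual standard deviation of the fit, then an ICH-style detection limit
residuals = [y - (intercept + slope * x) for x, y in zip(conc, resp)]
s_y = (sum(r * r for r in residuals) / (n - 2)) ** 0.5
lod = 3.3 * s_y / slope
```

Linearity (r²), precision and the lower limit of quantification would be checked over the same concentration range during a SWGTOX-style validation.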