975 results for DEPENDENT QUANTUM PROBLEMS
Abstract:
Depression in childhood or adolescence is associated with increased rates of depression in adulthood. Does this justify efforts to detect (and treat) those with symptoms of depression in early childhood or adolescence? The aim of this study was to determine how well symptoms of anxiety/depression (A-D) in early childhood and adolescence predict adult mental health. The study sample is taken from a population-based prospective birth cohort study. Of the 8556 mothers initially approached to participate, 8458 agreed, of whom 7223 gave birth to a live singleton baby. Children were screened using modified Child Behaviour Checklist (CBCL) scales for internalizing and total problems (T-P) at age 5, and the CBCL and Youth Self Report (YSR) A-D subscale and T-P scale at age 14. At age 21, a sub-sample of 2563 young adults in this cohort was administered the CIDI-Auto. Results indicated that screening at age 5 would detect few later cases of significant mental ill-health. Using a cut-point of 20% for internalizing at age 5 years, the CBCL had sensitivities of only 25% and 18% for major depression and anxiety disorders at 21 years, respectively. At age 14, the YSR generally performed a little better than the CBCL as a screening instrument, but neither performed at a satisfactory level. Of the children categorised as having YSR A-D at 14 years, 30% and 37% met DSM-IV criteria for major depression and anxiety disorders, respectively, at age 21. Our findings challenge an existing movement encouraging the detection and treatment of those with symptoms of mental illness in early childhood.
Abstract:
In this paper, we present three counterfeiting attacks on block-wise dependent fragile watermarking schemes. We consider vulnerabilities such as the exploitation of weak correlation among block-wise dependent watermarks to modify valid watermarked (medical or other digital) images such that they can still be verified as authentic, even though they are not. Experimental results demonstrate the practicability and consequences of the proposed attacks for several relevant schemes. The proposed attack models can be used as a means to systematically examine the security levels of similar watermarking schemes.
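The general failure mode the abstract alludes to can be illustrated in the degenerate limit where block-wise dependence is absent or too weak to matter: if each block's watermark is (effectively) a function of that block alone, blocks copied intact from any authentic image still verify. The toy scheme below is purely illustrative and is not one of the paper's attacked schemes; all names and block contents are made up.

```python
import hashlib

# Toy fragile watermarking scheme (illustrative only, not the paper's):
# each block carries a watermark derived from its own content. A collage
# attack splices authentic (block, watermark) pairs from different images,
# producing a modified image that still verifies as authentic.

def mark(block: bytes) -> bytes:
    return hashlib.sha256(block).digest()[:4]   # per-block watermark

def embed(blocks):
    return [(b, mark(b)) for b in blocks]

def verify(marked):
    return all(w == mark(b) for b, w in marked)

img_a = embed([b"roof", b"door", b"lawn"])
img_b = embed([b"tree", b"car!", b"gate"])

# Splice blocks across two authentic images: the forgery still verifies.
forged = [img_a[0], img_b[1], img_a[2]]
print(verify(forged))  # True, although the image was tampered with
```

Block-wise *dependent* schemes chain neighbouring blocks into each watermark precisely to defeat this; the paper's point is that when that dependence is weakly correlated, analogous substitution attacks remain feasible.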
Abstract:
A one-time program is a hypothetical device by which a user may evaluate a circuit on exactly one input of his choice, before the device self-destructs. One-time programs cannot be achieved by software alone, as any software can be copied and re-run. However, it is known that every circuit can be compiled into a one-time program using a very basic hypothetical hardware device called a one-time memory. At first glance it may seem that quantum information, which cannot be copied, might also allow for one-time programs. But it is not hard to see that this intuition is false: one-time programs for classical or quantum circuits based solely on quantum information do not exist, even with computational assumptions. This observation raises the question, "what assumptions are required to achieve one-time programs for quantum circuits?" Our main result is that any quantum circuit can be compiled into a one-time program assuming only the same basic one-time memory devices used for classical circuits. Moreover, these quantum one-time programs achieve statistical universal composability (UC-security) against any malicious user. Our construction employs methods for computation on authenticated quantum data, and we present a new quantum authentication scheme called the trap scheme for this purpose. As a corollary, we establish UC-security of a recent protocol for delegated quantum computation.
Abstract:
We present a method for optical encryption of information based on the time-dependent dynamics of writing and erasing refractive index changes in a bulk lithium niobate medium. Information is written into the photorefractive crystal with a spatially amplitude-modulated laser beam which, when overexposed, significantly degrades the stored data, making it unrecognizable. We show that the degradation can be reversed and that a one-to-one relationship exists between the degradation and recovery rates. This simple relationship can be used to determine the erasure time required for decrypting the scrambled index patterns. In addition, the method could serve as a straightforward general technique for determining characteristic writing and erasure rates in photorefractive media.
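The idea of deriving an erasure time from known rates can be sketched with a first-order (single-exponential) write/erase model, a common idealization of photorefractive kinetics. The numbers and time constants below are hypothetical, not the paper's measured material parameters.

```python
import math

# Minimal single-exponential model of photorefractive write/erase kinetics.
# dn_sat, tau_w, tau_e are hypothetical material parameters for illustration.

def written_index(t, dn_sat, tau_w):
    """Index change after writing for time t: saturating exponential."""
    return dn_sat * (1.0 - math.exp(-t / tau_w))

def erase_time(dn_now, dn_target, tau_e):
    """Time to erase an index change dn_now down to dn_target,
    assuming exponential decay dn(t) = dn_now * exp(-t / tau_e)."""
    return tau_e * math.log(dn_now / dn_target)

# Overexpose: write far past the readable contrast level ...
dn = written_index(t=50.0, dn_sat=1.0, tau_w=10.0)
# ... then compute how long to erase back to a readable level.
t_dec = erase_time(dn, dn_target=0.5, tau_e=8.0)
print(round(dn, 4), round(t_dec, 4))
```

The one-to-one relationship between degradation and recovery rates reported in the abstract is what would, in practice, fix the value of `tau_e` from the measured writing dynamics.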
Abstract:
This paper examines the use of short video tutorials in a post-graduate accounting subject, as a means of helping students transition from dependent to more independent learners. Five short (three to five minute) video tutorials were introduced in an effort to shift the reliance for learning from the lecturer to the student. Students’ usage of video tutorials, comments by students, and reliance on teaching staff for individual assistance were monitored over three semesters from 2008 to 2009. Interviews with students were then conducted in late 2009 to more comprehensively evaluate the use and benefits of video tutorials. Findings reveal preliminary but positive outcomes in terms of both more efficient teaching and more effective learning.
Abstract:
Reliable performance of biometric identity verification systems remains a significant challenge. Individual biometric samples from the same person (identity class) are not identical at each presentation, and performance degradation arises from intra-class variability and inter-class similarity. These limitations lead to false accepts and false rejects that are dependent, so it is difficult to reduce the rate of one type of error without increasing the other. The focus of this dissertation is a method based on classifier fusion techniques to better control the trade-off between the verification errors, using text-dependent speaker verification as the test platform. A sequential classifier fusion architecture that integrates multi-instance and multi-sample fusion schemes is proposed. This fusion method enables a controlled trade-off between false alarms and false rejects. For statistically independent classifier decisions, analytical expressions for each type of verification error are derived from the base classifier performances. As this assumption may not always be valid, these expressions are modified to incorporate the correlation between statistically dependent decisions from clients and impostors. The architecture is empirically evaluated for text-dependent speaker verification using Hidden Markov Model-based, digit-dependent speaker models in each stage, with multiple attempts for each digit utterance. The trade-off between the verification errors is controlled by two parameters, the number of decision stages (instances) and the number of attempts at each decision stage (samples), fine-tuned on an evaluation/tuning set. The statistical validity of the derived error estimates is evaluated on test data. The performance of the sequential method is further shown to depend on the order in which the digits (instances) are combined and on the nature of the repeated attempts (samples).
The false rejection and false acceptance rates of the proposed fusion are estimated from the base classifier performances, the variance in correlation between classifier decisions, and the sequence of classifiers with favourable dependence selected using the 'Sequential Error Ratio' criterion. The error rates are better estimated by incorporating user-dependent information (such as speaker-dependent thresholds and speaker-specific digit combinations) and class-dependent information (such as client-impostor favourable combinations and class-error-based threshold estimation). The proposed architecture is desirable in most speaker verification applications, such as remote authentication and telephone and internet shopping. Tuning the parameters - the number of instances and samples - serves both the security and user-convenience requirements of speaker-specific verification. The architecture investigated here is also applicable to verification using other biometric modalities, such as handwriting, fingerprints and keystrokes.
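Under the statistical-independence assumption mentioned above, one natural form of the error expressions for a sequential multi-instance/multi-sample rule (each digit stage accepts if any of its repeated attempts is accepted; the overall claim is accepted only if every stage accepts) can be sketched as follows. The per-attempt rates are hypothetical, not figures from the dissertation, and this is only one plausible fusion rule, not necessarily the exact one analysed there.

```python
# Sketch of independence-case error expressions for sequential fusion:
# n digit classifiers (instances), each given several attempts (samples).
# A stage accepts if ANY attempt is accepted; the claim is accepted only
# if ALL stages accept. Rates below are hypothetical illustrations.

def fused_error_rates(far, frr, n_attempts):
    """far/frr: per-attempt error rates per stage, assumed independent.
    Returns (fused false-accept rate, fused false-reject rate)."""
    p_false_accept = 1.0
    p_genuine_accept = 1.0
    for f, r in zip(far, frr):
        # Stage falsely accepts an impostor if any attempt is accepted.
        p_false_accept *= 1.0 - (1.0 - f) ** n_attempts
        # Stage accepts a client unless every attempt is rejected.
        p_genuine_accept *= 1.0 - r ** n_attempts
    return p_false_accept, 1.0 - p_genuine_accept

far = [0.05, 0.05, 0.05]   # three digit instances, per-attempt FAR
frr = [0.10, 0.10, 0.10]   # per-attempt FRR
for m in (1, 2, 3):        # more attempts: FRR falls, FAR rises
    fa, fr = fused_error_rates(far, frr, m)
    print(m, round(fa, 6), round(fr, 6))
```

The loop makes the controlled trade-off concrete: adding stages (instances) drives the false-accept rate down at the cost of false rejects, while adding attempts (samples) does the reverse.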
Abstract:
There is increasing interest in the use of information technology as a participatory planning tool, particularly the use of geographical information technologies to support collaborative activities such as community mapping. However, despite their promise, the introduction of such technologies does not necessarily promote better participation or improve collaboration. In part this can be attributed to a tendency for planners to focus on the technical considerations associated with these technologies at the expense of broader participation considerations. In this paper we draw on the experiences of a community mapping project with disadvantaged communities in suburban Australia to highlight the importance of selecting tools and techniques which support and enhance participatory planning. This community mapping project, designed to identify and document community-generated transport issues and solutions, had originally intended to use cadastral maps extracted from the government’s digital cadastral database as the foundation for its community mapping approach. It was quickly discovered that local residents found the cadastral maps confusing, as the maps lacked sufficient detail to orient them to their suburb (the study area). In response to these concerns, and consistent with the project’s participatory framework, a conceptual base map based on residents’ views of landmarks of local importance was developed to support the community mapping process. Based on this community mapping experience we outline four key lessons learned regarding the process of community mapping and the place of geographical information technologies within this process.
Abstract:
While the communicative turn in policy-making has encouraged the public deliberation of policy decisions it has arguably had a more limited impact on the ability of public processes to deal with wicked problems. Wicked policy problems are characterised by high levels of complexity, uncertainty and divergence of values. However, some wicked problems present the additional challenge of high levels of psychosocial sensitivity and verbal proscription. Because these unspeakable policy problems frequently involve a significant moral dimension, the regulation of intimate processes or bodies, and strong elements of abjection and symbolic pollution they are quite literally problems that we don’t like to think about or talk about. However, the potential environmental and social impacts of these problems require that they be addressed. In this paper I present the preliminary findings of a research project focussed on the idea of the unspeakable policy problem and how its unspeakable nature can impact upon public participation and policy and environmental outcomes.
Abstract:
The formalin test is increasingly applied as a model of inflammatory pain using high formalin concentrations (5–15%). However, little is known about the effects of low formalin concentrations on related behavioural responses. To examine this, rat pups were subjected to various concentrations of formalin at four developmental stages: 7, 13, 22, and 82 days of age. At postnatal day (PND) 7, sex differences in flinching but not licking responses were observed, with 0.5% formalin evoking more flinching in males than in females. A dose response was evident in that 0.5% formalin also produced higher licking responses than 0.3% or 0.4% formalin. At PND 13, a concentration of 0.8% formalin evoked a biphasic response. At PND 22, a concentration of 1.1% evoked higher flinching and licking responses during the late phase (10–30 min) in both males and females. During the early phase (0–5 min), 1.1% evoked higher licking responses than 0.9% or 1% formalin, and 1.1% formalin produced a biphasic response that was not evident with 0.9% or 1%. At PND 82, rats displayed a biphasic pattern in response to three formalin concentrations (1.25%, 1.75% and 2.25%), with an interphase present for both 1.75% and 2.25% but not for 1.25%. These data suggest that low formalin concentrations induce fine-tuned responses that are not apparent with the high formalin concentrations commonly used in the formalin test. They also show that the developing nociceptive system is very sensitive to subtle changes in formalin concentration.
Abstract:
Recently, it has been suggested that osteocytes control the activities of bone formation (osteoblasts) and resorption (osteoclasts), indicating an important regulatory role in bone remodelling. However, to date, the role of osteocytes in controlling bone vascularisation remains unknown. Our aim was to investigate the interaction between endothelial cells and osteocytes and to explore the possible molecular mechanisms during angiogenesis. To model osteocyte/endothelial cell interactions, we co-cultured an osteocyte cell line (MLO-Y4) with an endothelial cell line (HUVECs). Co-cultures were performed in a 1:1 mixture of osteocytes and endothelial cells or by using the conditioned media (CM) transfer method. Real-time cell migration of HUVECs was measured with the transwell migration assay and the xCELLigence system. Expression levels of angiogenesis-related genes were measured by quantitative real-time polymerase chain reaction (qRT-PCR). The effects of vascular endothelial growth factor (VEGF) and mitogen-activated protein kinase (MAPK) signaling were monitored by western blotting using the relevant antibodies and inhibitors. During bone formation, osteocyte dendritic processes were observed to be closely connected to the blood vessels. CM generated from MLO-Y4 cells activated proliferation, migration, tube-like structure formation, and upregulation of angiogenic genes in endothelial cells, suggesting that secretory factor(s) from osteocytes could be responsible for angiogenesis. Furthermore, we identified that VEGF secreted from MLO-Y4 cells activated VEGFR2–MAPK–ERK signaling pathways in HUVECs. Inhibiting the VEGF and/or MAPK–ERK pathways abrogated osteocyte-mediated angiogenesis in HUVECs. Our data suggest an important role for osteocytes in regulating angiogenesis.
Abstract:
The electrochemical and electrocatalytic behaviour of silver nanoprisms, nanospheres and nanocubes of comparable size in an alkaline medium has been investigated to ascertain the shape-dependent behaviour of silver nanoparticles, an extensively studied nanomaterial. The nanomaterials were synthesised using chemical methods and characterised with UV-visible spectroscopy, transmission electron microscopy and X-ray diffraction. They were then immobilised on a substrate glassy carbon electrode and characterised by cyclic voltammetry for their surface oxide electrochemistry. The electrocatalytic oxidation of hydrazine and formaldehyde and the reduction of hydrogen peroxide were studied by performing cyclic voltammetric and chronoamperometric experiments for both the nanomaterials and a smooth polycrystalline macro-sized silver electrode. In all cases the nanomaterials showed enhanced electrocatalytic activity over the macro-silver electrode. Significantly, the silver nanoprisms, which are rich in hcp lamellar defects, showed greater activity than the nanospheres and nanocubes for all reactions studied.
Abstract:
Vaccination assurance was developed in the 1980s to increase vaccine uptake. However, there have been problems with its concept, scope, management and claims. Possible solutions include regulating vaccination fees and developing vaccination insurance. Chinese abstract (translated): The planned immunization assurance system emerged in the early 1980s as part of the reform of China's health sector and has played an important role in promoting planned immunization work. However, as people's health needs have grown and planned immunization work has deepened, much of the system's content no longer suits current vaccination work and urgently needs to be standardized and improved.
Abstract:
Introduction: Malignant pleural mesothelioma (MPM) is a rapidly fatal malignancy that is increasing in incidence. The caspase 8 inhibitor FLIP is an anti-apoptotic protein over-expressed in several cancer types, including MPM. The histone deacetylase (HDAC) inhibitor Vorinostat (SAHA) is currently being evaluated in relapsed mesothelioma. We examined the roles of FLIP and caspase 8 in regulating SAHA-induced apoptosis in MPM. Methods: The mechanism of SAHA-induced apoptosis was assessed in 7 MPM cell lines and in a multicellular spheroid model. siRNA and overexpression approaches were used, and cell death was assessed by flow cytometry, Western blotting and clonogenic assays. Results: RNAi-mediated FLIP silencing resulted in caspase 8-dependent apoptosis in MPM cell line models. SAHA potently down-regulated FLIP protein expression in all 7 MPM cell lines and in a multicellular spheroid model of MPM. In 6/7 MPM cell lines, SAHA treatment induced significant levels of apoptosis, and this apoptosis was caspase 8-dependent in all six sensitive cell lines. SAHA-induced apoptosis was also inhibited by stable FLIP overexpression. In contrast, down-regulation of HR23B, a candidate predictive biomarker for HDAC inhibitors, significantly inhibited SAHA-induced apoptosis in only 1/6 SAHA-sensitive MPM cell lines. Analysis of MPM patient samples demonstrated significant inter-patient variation in FLIP and caspase 8 expression. In addition, SAHA enhanced cisplatin-induced apoptosis in a FLIP-dependent manner. Conclusions: These results indicate that FLIP is a major target for SAHA in MPM and identify FLIP, caspase 8 and associated signalling molecules as candidate biomarkers for SAHA in this disease. © 2011 Elsevier Ltd. All rights reserved.
Abstract:
Re-programming of gene expression is fundamental for skeletal muscle adaptations in response to endurance exercise. This study investigated the time-course dependent changes in the muscular transcriptome following an endurance exercise trial consisting of 1 h of intense cycling immediately followed by 1 h of intense running. Skeletal muscle samples were taken at baseline, 3 h, 48 h, and 96 h post-exercise from eight healthy, endurance-trained, male individuals. RNA was extracted from muscle. Differential gene expression was evaluated using Illumina microarrays and validated with qPCR. Gene set enrichment analysis identified enriched molecular signatures chosen from the Molecular Signatures Database. Three h post-exercise, 102 gene sets were up-regulated [family wise error rate (FWER), P < 0.05]; including groups of genes related with leukocyte migration, immune and chaperone activation, and cyclic AMP responsive element binding protein (CREB) 1-signaling. Forty-eight h post-exercise, among 19 enriched gene sets (FWER, P < 0.05), two gene sets related to actin cytoskeleton remodeling were up-regulated. Ninety-six h post-exercise, 83 gene sets were enriched (FWER, P < 0.05), 80 of which were up-regulated; including gene groups related to chemokine signaling, cell stress management, and extracellular matrix remodeling. These data provide comprehensive insights into the molecular pathways involved in acute stress, recovery, and adaptive muscular responses to endurance exercise. The novel 96 h post-exercise transcriptome indicates substantial transcriptional activity, potentially associated with the prolonged presence of leukocytes in the muscles. This suggests that muscular recovery, from a transcriptional perspective, is incomplete 96 h after endurance exercise involving muscle damage.
Abstract:
The method of generalized estimating equations (GEE) is a popular tool for analysing longitudinal (panel) data. Often, the covariates collected are time-dependent in nature, for example, age, relapse status, monthly income. When using GEE to analyse longitudinal data with time-dependent covariates, crucial assumptions about the covariates are necessary for valid inferences to be drawn. When those assumptions do not hold or cannot be verified, Pepe and Anderson (1994, Communications in Statistics, Simulations and Computation 23, 939–951) advocated using an independence working correlation assumption in the GEE model as a robust approach. However, using GEE with the independence correlation assumption may lead to significant efficiency loss (Fitzmaurice, 1995, Biometrics 51, 309–317). In this article, we propose a method that extracts additional information from the estimating equations that are excluded by the independence assumption. The method always includes the estimating equations under the independence assumption and the contribution from the remaining estimating equations is weighted according to the likelihood of each equation being a consistent estimating equation and the information it carries. We apply the method to a longitudinal study of the health of a group of Filipino children.
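The independence-working-correlation baseline that Pepe and Anderson advocate has a simple concrete form: for a linear (identity-link) model, GEE under an independence working correlation reduces to pooled least squares over all subject-time observations. The sketch below shows only that baseline, not the article's proposed weighted extension; the panel data are made up for illustration.

```python
# Sketch: for a linear (identity-link) model, GEE with an independence
# working correlation reduces to pooled least squares across all
# subject-time observations. Hypothetical data, not the Filipino
# child-health study analysed in the article.

def pooled_ols(X, y):
    """Solve the 2x2 normal equations (X'X) b = X'y for a
    two-column design matrix (intercept, covariate)."""
    s00 = s01 = s11 = t0 = t1 = 0.0
    for (x0, x1), yi in zip(X, y):
        s00 += x0 * x0; s01 += x0 * x1; s11 += x1 * x1
        t0 += x0 * yi;  t1 += x1 * yi
    det = s00 * s11 - s01 * s01
    b0 = (s11 * t0 - s01 * t1) / det
    b1 = (s00 * t1 - s01 * t0) / det
    return b0, b1

# Panel data: 3 subjects x 4 time points, one time-dependent covariate.
# y = 2 + 0.5 * x exactly, so pooled OLS recovers (2.0, 0.5).
X = [(1.0, float(t + s)) for s in range(3) for t in range(4)]
y = [2.0 + 0.5 * x1 for (_, x1) in X]

beta = pooled_ols(X, y)
print(beta)  # -> (2.0, 0.5)
```

The article's contribution is to recover efficiency lost by this baseline: the extra estimating equations excluded under independence are added back with weights reflecting how likely each is to be consistent and how much information it carries.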