957 results for Module MAPK


Relevance: 10.00%

Abstract:

To evaluate the timing of mutations in BRAF (v-raf murine sarcoma viral oncogene homolog B1) during melanocytic neoplasia, we carried out mutation analysis on microdissected melanoma and nevi samples. We observed mutations resulting in the V599E amino-acid substitution in 41 of 60 (68%) melanoma metastases, 4 of 5 (80%) primary melanomas and, unexpectedly, in 63 of 77 (82%) nevi. These data suggest that mutational activation of the RAS/RAF/MAPK pathway in nevi is a critical step in the initiation of melanocytic neoplasia but alone is insufficient for melanoma tumorigenesis.

Relevance: 10.00%

Abstract:

We have used microarray gene expression profiling and machine learning to predict the presence of BRAF mutations in a panel of 61 melanoma cell lines. The BRAF gene was found to be mutated in 42 samples (69%) and intragenic mutations of the NRAS gene were detected in seven samples (11%). No cell line carried mutations of both genes. Using support vector machines, we have built a classifier that differentiates between melanoma cell lines based on BRAF mutation status. As few as 83 genes are able to discriminate between BRAF mutant and BRAF wild-type samples with clear separation observed using hierarchical clustering. Multidimensional scaling was used to visualize the relationship between a BRAF mutation signature and that of a generalized mitogen-activated protein kinase (MAPK) activation (either BRAF or NRAS mutation) in the context of the discriminating gene list. We observed that samples carrying NRAS mutations lie somewhere between those with or without BRAF mutations. These observations suggest that there are gene-specific mutation signals in addition to a common MAPK activation that result from the pleiotropic effects of either BRAF or NRAS on other signaling pathways, leading to measurably different transcriptional changes.
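The support-vector step above can be illustrated with a toy numpy sketch: a linear SVM trained by subgradient descent on the regularised hinge loss over synthetic "expression" data. The sample count, the 83-gene dimensionality and the class-mean shift are invented stand-ins; the study itself used a full SVM implementation on real microarray profiles.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "expression" matrix: 40 samples x 83 genes, two classes standing
# in for BRAF-mutant (+1) vs wild-type (-1) profiles. All numbers here are
# invented for illustration.
n, d = 40, 83
X = rng.normal(size=(n, d))
y = np.where(np.arange(n) < n // 2, 1.0, -1.0)
X[y == 1] += 0.8                       # separate the class means

# Linear SVM via subgradient descent on the regularised hinge loss.
w, b, lam, lr = np.zeros(d), 0.0, 0.01, 0.01
for _ in range(500):
    mask = y * (X @ w + b) < 1         # samples violating the margin
    if mask.any():
        w -= lr * (lam * w - (y[mask, None] * X[mask]).mean(axis=0))
        b += lr * y[mask].mean()
    else:
        w -= lr * lam * w

acc = float(np.mean(np.sign(X @ w + b) == y))
print(f"training accuracy: {acc:.2f}")
```

With well-separated class means, the learned hyperplane aligns with the difference of the class centroids, which is the intuition behind the discriminating gene list described in the abstract.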

Relevance: 10.00%

Abstract:

The action potential (AP) of a cardiac cell is made up of a complex balance of ionic currents which flow across the cell membrane in response to electrical excitation of the cell. Biophysically detailed mathematical models of the AP have grown larger in terms of the variables and parameters required to model new findings in subcellular ionic mechanisms. The fitting of parameters to such models has seen a large degree of parameter and module re-use from earlier models. An alternative method for modelling electrically excitable cardiac tissue is a phenomenological model, which reconstructs tissue-level AP wave behaviour without subcellular details. A new parameter estimation technique to fit the morphology of the AP in a four-variable phenomenological model is presented. An approximation of a nonlinear ordinary differential equation model is established that corresponds to the given phenomenological model of the cardiac AP. The parameter estimation problem is converted into a minimisation problem for the unknown parameters. A modified hybrid Nelder–Mead simplex search and particle swarm optimization is then used to solve the minimisation problem. The successful fitting of data generated from a well-known biophysically detailed model is demonstrated. A successful fit to an experimental AP recording that contains both noise and experimental artefacts is also produced. The parameter estimation method's ability to fit a complex morphology to a model with substantially more parameters than previously used is established.
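The minimisation step can be sketched as follows. As an illustration only, a difference-of-exponentials curve stands in for the four-variable phenomenological model, and SciPy's plain Nelder–Mead simplex replaces the hybrid Nelder–Mead/particle-swarm search described in the abstract; the parameter names and values are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic "action potential" trace: a difference of exponentials is used
# here as a hypothetical stand-in for the phenomenological model.
t = np.linspace(0, 300, 301)                      # time (ms)

def ap_model(t, tau_rise, tau_decay, amp):
    return amp * (np.exp(-t / tau_decay) - np.exp(-t / tau_rise))

true_params = (5.0, 80.0, 120.0)
rng = np.random.default_rng(1)
data = ap_model(t, *true_params) + rng.normal(0, 1.0, t.size)  # noisy recording

# Least-squares misfit between model and recording; the thesis minimises a
# similar objective, of which only the simplex stage is sketched here.
def misfit(p):
    return np.sum((ap_model(t, *p) - data) ** 2)

res = minimize(misfit, x0=[10.0, 50.0, 100.0], method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-6, "maxiter": 5000})
print(res.x)   # recovered (tau_rise, tau_decay, amp)
```

In the thesis's hybrid scheme, a population-based particle-swarm stage would precede the simplex refinement to reduce sensitivity to the starting point.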

Relevance: 10.00%

Abstract:

Introduction: Degradative enzymes, such as a disintegrin and metalloproteinase with thrombospondin motifs (ADAMTS) and matrix metalloproteinases (MMPs), play key roles in osteoarthritis (OA) development. The aim of the present study was to investigate whether cross-talk between subchondral bone osteoblasts (SBOs) and articular cartilage chondrocytes (ACCs) in OA alters the expression and regulation of ADAMTS5, ADAMTS4, MMP-1, MMP-2, MMP-3, MMP-8, MMP-9 and MMP-13, and also to test the possible involvement of the mitogen-activated protein kinase (MAPK) signaling pathway during this process. Methods: ACCs and SBOs were isolated from normal and OA patients. An in vitro co-culture model was developed to study the regulation of ADAMTS and MMPs under normal and OA joint cross-talk conditions. The MAPK-ERK inhibitor PD98059 was applied to delineate the involvement of this specific pathway during the interaction process. Results: Indirect co-culture of OA SBOs with normal ACCs resulted in significantly increased expression of ADAMTS5, ADAMTS4, MMP-2, MMP-3 and MMP-9 in the ACCs, whereas co-culture with OA ACCs led to increased MMP-1 and MMP-2 expression in normal SBOs. The upregulation of ADAMTS and MMPs under these conditions was correlated with activation of the MAPK-ERK1/2 signaling pathway, and the addition of the MAPK-ERK inhibitor PD98059 reversed the overexpression of ADAMTS and MMPs in co-cultures. Conclusion: In summary, these results add to the evidence that, in human OA, altered bi-directional signals transmitted between SBOs and ACCs significantly impact critical features of both cartilage and bone by producing abnormal levels of ADAMTS and MMPs. Furthermore, we have demonstrated for the first time that this altered cross-talk is mediated by phosphorylation of the MAPK-ERK1/2 signaling pathway.

Relevance: 10.00%

Abstract:

The World Health Organization recommends that data on mortality in its member countries be collected utilising the Medical Certificate of Cause of Death published in the instruction volume of the ICD-10. However, investment in the health information processes necessary to promote the use of this certificate and improve mortality information is lacking in many countries. An appeal for support to make improvements has been launched through the Health Metrics Network's MOVE-IT (Monitoring of Vital Events – Information Technology) strategy [World Health Organization, 2011]. Despite this international spotlight on the need to capture mortality data and to use the ICD-10 to code the data reported on such certificates, there is little cohesion in the way that certifiers of deaths receive instruction in how to complete the death certificate, which is the main source document for mortality statistics. Complete and accurate documentation of the immediate, underlying and contributory causes of death of the decedent on the death certificate is a requirement for producing standardised statistical information and for the ability to produce cause-specific mortality statistics that can be compared between populations and across time. This paper reports on a research project conducted to determine the efficacy and accessibility of the certification module of the WHO's newly developed web-based training tool for coders and certifiers of deaths. Involving a population of medical students from the Fiji School of Medicine and a pre- and post-test research design, the study entailed completion of death certificates based on vignettes before and after access to the training tool. The ability of the participants to complete the death certificates, and analysis of the completeness and specificity of the ICD-10 coding of the reported causes of death, were used to measure the effect of the students' learning from the training tool.
The quality of death certificate completion was assessed using a Quality Index before and after the participants accessed the training tool. In addition, the views of the participants about the accessibility and use of the training tool were elicited using a supplementary questionnaire. The results of the study demonstrated improvement in the ability of the participants to complete death certificates completely and accurately according to best practice. The training tool was viewed very positively and its implementation in the curriculum for medical students was encouraged. Participants also recommended that interactive discussions examining the certification exercises would be an advantage.
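The pre/post comparison can be sketched in a few lines. The scoring criteria below (fraction of required certificate items completed) are invented for illustration; the study's actual Quality Index has its own weighting.

```python
# Hypothetical Quality Index: fraction of required certificate fields
# (immediate cause, underlying cause, contributory causes, interval)
# completed. Field names and the scoring rule are assumptions.
REQUIRED = ["immediate_cause", "underlying_cause", "contributory", "interval"]

def quality_index(certificate):
    return sum(1 for f in REQUIRED if certificate.get(f)) / len(REQUIRED)

# Two invented participants, scored before and after the training tool.
pre  = [{"immediate_cause": 1},                                # QI 0.25
        {"immediate_cause": 1, "underlying_cause": 1}]         # QI 0.50
post = [{f: 1 for f in REQUIRED},                              # QI 1.00
        {"immediate_cause": 1, "underlying_cause": 1,
         "contributory": 1}]                                   # QI 0.75

gain = (sum(map(quality_index, post)) - sum(map(quality_index, pre))) / len(pre)
print(f"mean Quality Index improvement: {gain:.2f}")
```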

Relevance: 10.00%

Abstract:

Computer resource allocation represents a significant challenge, particularly for multiprocessor systems, which consist of shared computing resources to be allocated among co-runner processes and threads. While efficient resource allocation results in a highly efficient and stable overall multiprocessor system and good individual thread performance, ineffective resource allocation causes significant performance bottlenecks even for a system with ample computing resources. This thesis proposes a cache-aware adaptive closed-loop scheduling framework as an efficient resource allocation strategy for this highly dynamic resource management problem, which requires instant estimation of highly uncertain and unpredictable resource patterns. Many different approaches to this dynamic resource allocation problem have been developed, but neither the dynamic nature nor the time-varying and uncertain characteristics of the problem are well considered. These approaches employ either static or dynamic optimization methods, or advanced scheduling algorithms such as the Proportional Fair (PFair) scheduling algorithm. Some of the approaches that do consider the dynamic nature of multiprocessor systems apply only a basic closed-loop system; hence, they fail to take the time-varying and uncertain characteristics of the system into account. Therefore, further research into multiprocessor resource allocation is required. Our closed-loop cache-aware adaptive scheduling framework takes the resource availability and the resource usage patterns into account by measuring time-varying factors such as cache miss counts, stalls and instruction counts. More specifically, the cache usage pattern of a thread is identified using the QR recursive least squares (RLS) algorithm and cache miss count time-series statistics. For the identified cache resource dynamics, our closed-loop cache-aware adaptive scheduling framework enforces instruction fairness for the threads.
Fairness in the context of our research project is defined as resource allocation equity, which reduces co-runner thread dependence in a shared resource environment. In this way, instruction count degradation due to shared cache resource conflicts is overcome. In this respect, our closed-loop cache-aware adaptive scheduling framework contributes to the research field in two major and three minor aspects. The two major contributions lead to the cache-aware scheduling system. The first major contribution is the development of the execution fairness algorithm, which mitigates the co-runner cache impact on thread performance. The second is the development of the relevant mathematical models, such as the thread execution pattern and cache access pattern models, which formulate the execution fairness algorithm in terms of mathematical quantities. Following the development of the cache-aware scheduling system, our adaptive self-tuning control framework is constructed to add an adaptive closed-loop aspect to the cache-aware scheduling system. This control framework consists of two main components: the parameter estimator and the controller design module. The first minor contribution is the development of the parameter estimators; the QR recursive least squares (RLS) algorithm is applied within our closed-loop cache-aware adaptive scheduling framework to estimate the highly uncertain and time-varying cache resource patterns of threads. The second minor contribution is the design of the controller design module; the algebraic controller design algorithm, pole placement, is utilised to design the relevant controller, which is able to provide the desired time-varying control action. The adaptive self-tuning control framework and the cache-aware scheduling system together constitute our final framework, the closed-loop cache-aware adaptive scheduling framework.
The third minor contribution is the validation of this closed-loop cache-aware adaptive scheduling framework's efficiency in overcoming co-runner cache dependency. Time-series statistical counters were developed for the M-Sim Multi-Core Simulator, and the theoretical findings and mathematical formulations were implemented as MATLAB m-file code. In this way, the overall framework was tested and the experimental outcomes analysed. According to our experimental outcomes, it is concluded that our closed-loop cache-aware adaptive scheduling framework successfully drives the co-runner cache dependent thread instruction count to the co-runner independent instruction count, with an error margin of up to 25% when the cache is highly utilized. In addition, the thread cache access pattern is estimated with 75% accuracy.
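The parameter-estimation step can be sketched with standard exponentially weighted recursive least squares; the thesis applies a QR-factorised variant, but the plain form below estimates the same kind of model. The AR(2) cache-miss process and its coefficients are invented stand-ins for a thread's time-varying cache usage pattern.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic per-interval cache-miss counts following an AR(2) process,
# a hypothetical stand-in for a thread's cache usage dynamics.
a1, a2 = 0.6, 0.3
miss = np.zeros(500)
miss[:2] = rng.normal(size=2)
for k in range(2, 500):
    miss[k] = a1 * miss[k-1] + a2 * miss[k-2] + rng.normal(0, 0.1)

# Standard exponentially weighted RLS (the thesis uses a QR-factorised
# variant; this plain form estimates the same AR coefficients).
theta = np.zeros(2)            # estimated [a1, a2]
P = np.eye(2) * 1000.0         # inverse correlation matrix
lam = 0.99                     # forgetting factor for time-varying dynamics
for k in range(2, 500):
    phi = np.array([miss[k-1], miss[k-2]])
    gain = P @ phi / (lam + phi @ P @ phi)
    theta += gain * (miss[k] - phi @ theta)
    P = (P - np.outer(gain, phi @ P)) / lam

print(theta)   # should approach [0.6, 0.3]
```

The forgetting factor below 1 is what lets the estimator track a pattern that drifts over time, which is the property the scheduling framework relies on.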

Relevance: 10.00%

Abstract:

Professional Development module video interview:
- What does the principle of inclusive practice look/sound/feel like in the early years setting? (7 min 9 sec; 15 MB)
- What do you see as the role of the teacher and support personnel in terms of inclusive practice? Why is collaboration so important? (3 min; 6 MB)
- What communication strategies would help support inclusive practices with parents? (4 min; 9 MB)

Relevance: 10.00%

Abstract:

It is not uncommon for enterprises today to be faced with the demand to integrate and incorporate many different, possibly heterogeneous systems, which are generally independently designed and developed, to allow seamless access. In effect, the integration of these systems results in one large whole system that must be able, at the same time, to maintain local autonomy and to continue working as an independent entity. This problem has introduced a new distributed architecture called federated systems. The most challenging issue in federated systems is to find answers to the question of how members can cooperate efficiently while preserving their autonomous characteristics, especially their security autonomy. This thesis intends to address this issue. The thesis reviews the evolution of the concept of federated systems and discusses the organisational characteristics as well as the remaining security issues of the existing approaches. The thesis examines how delegation can be used as a means to achieve better security, especially authorisation, while maintaining autonomy for the participating members of the federation. A delegation taxonomy is proposed as one of the main contributions. The major contribution of this thesis is to study and design a mechanism to support delegation within and between multiple security domains with constraint management capability. A novel delegation framework is proposed, comprising two modules: a Delegation Constraint Management module and a Policy Management module. The first module is designed to effectively create, track and manage delegation constraints, especially for delegation processes which require re-delegation (indirect delegation). It employs two algorithms: one to trace the root authority of a delegation constraint chain and one to prevent potential conflicts when creating a delegation constraint chain, if necessary. The first module is designed for conflict prevention, not conflict resolution.
The second module is designed to support the first module via its policy comparison capability. The major function of this module is to provide the delegation framework with the capability to compare policies and constraints (written in the format of a policy). The module is an extension of Lin et al.'s work on policy filtering and policy analysis. Throughout the thesis, case studies are used as examples to illustrate the discussed concepts. These two modules are designed to capture one of the most important aspects of the delegation process: the relationships between the delegation transactions and the involved constraints, which are not well addressed by existing approaches. This contribution is significant because these relationships provide the information needed to track and enforce the involved delegation constraints and, therefore, play a vital role in maintaining and enforcing security for transactions across multiple security domains.
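The constraint-chain bookkeeping described above can be sketched minimally: each delegation records its delegator, delegatee and constraint, the root authority is found by walking the chain back to the original grantor, and re-delegation is refused before a constraint would be violated. All names and the depth-based conflict rule are illustrative assumptions, not the thesis's actual framework.

```python
# Minimal sketch of a delegation constraint chain (names and the
# max-depth rule are invented for illustration).
class Delegation:
    def __init__(self, delegator, delegatee, constraint, parent=None):
        self.delegator = delegator
        self.delegatee = delegatee
        self.constraint = constraint       # e.g. max re-delegation depth
        self.parent = parent               # previous link in the chain

    def root_authority(self):
        node = self
        while node.parent is not None:     # walk back to the original grantor
            node = node.parent
        return node.delegator

    def depth(self):
        d, node = 0, self
        while node.parent is not None:
            d, node = d + 1, node.parent
        return d

def can_redelegate(link):
    # Conflict *prevention*: refuse to extend the chain past the root's
    # depth constraint, rather than resolving a violation afterwards.
    return link.depth() + 1 <= link.constraint["max_depth"]

root = Delegation("domain-A-admin", "alice", {"max_depth": 2})
hop1 = Delegation("alice", "bob", root.constraint, parent=root)

print(hop1.root_authority())        # domain-A-admin
print(can_redelegate(hop1))         # True: one more hop is still allowed
```

Keeping the parent link on every transaction is what makes both algorithms (root tracing and conflict prevention) constant bookkeeping per delegation.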

Relevance: 10.00%

Abstract:

Existing secure software development principles tend to focus on coding vulnerabilities, such as buffer or integer overflows, that apply to individual program statements, or on issues associated with the run-time environment, such as component isolation. Here we instead consider software security from the perspective of potential information flow through a program's object-oriented module structure. In particular, we define a set of quantifiable "security metrics" which allow programmers to quickly and easily assess the overall security of a given source code program or object-oriented design. Although measuring quality attributes of object-oriented programs for properties such as maintainability and performance has been well covered in the literature, metrics which measure the quality of information security have received little attention. Moreover, existing security-relevant metrics assess a system either at a very high level, i.e., the whole system, or at a fine level of granularity, i.e., with respect to individual statements. These approaches make it hard and expensive to recognise a secure system from an early stage of development. Instead, our security metrics are based on well-established compositional properties of object-oriented programs (i.e., data encapsulation, cohesion, coupling, composition, extensibility, inheritance and design size), combined with data flow analysis principles that trace potential information flow between high- and low-security system variables. We first define a set of metrics to assess the security quality of a given object-oriented system based on its design artifacts, allowing defects to be detected at an early stage of development. We then extend these metrics to produce a second set applicable to object-oriented program source code.
The resulting metrics make it easy to compare the relative security of functionally equivalent system designs or source code programs so that, for instance, the security of two different revisions of the same system can be compared directly. This capability is further used to study the impact of specific refactoring rules on system security more generally, at both the design and code levels. By measuring the relative security of various programs refactored using different rules, we provide guidelines for the safe application of refactoring steps to security-critical programs. Finally, to make it easy and efficient to measure the security of a system design or program, we have also developed a stand-alone software tool which automatically analyses and measures the security of UML designs and Java program code. The tool's capabilities are demonstrated by applying it to a number of security-critical system designs and Java programs. Notably, the validity of the metrics is demonstrated empirically through measurements that confirm our expectation that program security typically improves as bugs are fixed, but worsens as new functionality is added.
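One illustrative metric in the spirit described above is the proportion of high-security ("classified") attributes that a design exposes outside their defining class: lower means better encapsulation of sensitive data. The class model and attribute labels below are invented for the sketch; the thesis defines its own metric suite over UML designs and Java code.

```python
# Hypothetical design-level metric: fraction of classified (high-security)
# attributes with public visibility. The design dictionary is an invented
# stand-in for a parsed UML model.
def classified_exposure(classes):
    classified = [(c, a) for c, attrs in classes.items()
                  for a in attrs if a["classified"]]
    if not classified:
        return 0.0
    exposed = [1 for _, a in classified if a["visibility"] == "public"]
    return len(exposed) / len(classified)

design = {
    "Account": [
        {"name": "balance", "classified": True,  "visibility": "private"},
        {"name": "pin",     "classified": True,  "visibility": "public"},
    ],
    "Logger":  [
        {"name": "buffer",  "classified": False, "visibility": "public"},
    ],
}

# 1 of 2 classified attributes is public -> 0.5 (lower is better here).
print(classified_exposure(design))
```

Because such a metric needs only design artifacts (class names, attribute labels, visibilities), it can be computed long before any code exists, which is the early-detection point the abstract emphasises.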

Relevance: 10.00%

Abstract:

The effects of tumour motion during radiation therapy delivery have been widely investigated. Motion effects have become increasingly important with the introduction of dynamic radiotherapy delivery modalities such as enhanced dynamic wedges (EDWs) and intensity modulated radiation therapy (IMRT), where a dynamically collimated radiation beam is delivered to the moving target, resulting in dose blurring and interplay effects which are a consequence of the combined tumour and beam motion. Prior to this work, reported studies on EDW-based interplay effects had been restricted to the use of experimental methods for assessing single-field, non-fractionated treatments. In this work, the interplay effects have been investigated for EDW treatments: single and multiple field treatments have been studied using experimental and Monte Carlo (MC) methods. Initially, this work experimentally studies interplay effects for single-field non-fractionated EDW treatments, using radiation dosimetry systems placed on a sinusoidally moving platform. A number of wedge angles (60°, 45° and 15°), field sizes (20 × 20, 10 × 10 and 5 × 5 cm²), amplitudes (10-40 mm in steps of 10 mm) and periods (2 s, 3 s, 4.5 s and 6 s) of tumour motion are analysed (using gamma analysis) for parallel and perpendicular motions (where the tumour and jaw motions are either parallel or perpendicular to each other). For parallel motion it was found that both the amplitude and period of tumour motion affect the interplay; this becomes more prominent where the collimator and tumour speeds become identical. For perpendicular motion the amplitude of tumour motion is the dominant factor, whereas varying the period of tumour motion has no observable effect on the dose distribution. The wedge angle results suggest that the use of a large wedge angle generates greater dose variation for both parallel and perpendicular motions.
The use of a small field size with a large tumour motion results in the loss of the wedged dose distribution for both parallel and perpendicular motion. From these single-field measurements, a motion amplitude and period were identified which show the poorest agreement between the target motion and the dynamic delivery, and these are used as the 'worst case motion parameters'. The experimental work is then extended to multiple-field fractionated treatments. Here a number of pre-existing, multiple-field, wedged lung plans are delivered to the radiation dosimetry systems, employing the worst case motion parameters. Moreover, a four-field EDW lung plan (using a 4D CT data set) is delivered to the IMRT quality control phantom with a dummy tumour insert over four fractions, using the worst case parameters, i.e. 40 mm amplitude and 6 s period. The analysis of the film doses using gamma analysis at 3%-3 mm indicates that the interplay effects did not average out in this particular study, with a gamma pass rate of 49%. To enable Monte Carlo modelling of the problem, the DYNJAWS component module (CM) of the BEAMnrc user code is validated and automated. DYNJAWS has recently been introduced to model dynamic wedges and is therefore commissioned here for 6 MV and 10 MV photon energies. It is shown that this CM can accurately model the EDWs for a number of wedge angles and field sizes. The dynamic and step-and-shoot modes of the CM are compared for their accuracy in modelling the EDW, and the dynamic mode is shown to be more accurate. The generation of the DYNJAWS-specific input file, which specifies the probability of selection of a subfield and the respective jaw coordinates, has been automated, simplifying the generation of BEAMnrc input files for DYNJAWS. The commissioned DYNJAWS model is then used to study multiple-field EDW treatments using MC methods.
The 4D CT data of an IMRT phantom with the dummy tumour are used to produce a set of Monte Carlo simulation phantoms, onto which the delivery of single-field and multiple-field EDW treatments is simulated. A number of static and motion multiple-field EDW plans have been simulated. The comparison of dose volume histograms (DVHs) and gamma volume histograms (GVHs) for four-field EDW treatments (where the collimator and patient motion are in the same direction) using small (15°) and large (60°) wedge angles indicates a greater mismatch between the static and motion cases for the large wedge angle. Finally, to use gel dosimetry as a validation tool, a new technique called the 'zero-scan method' is developed for reading gel dosimeters with x-ray computed tomography (CT). It has been shown that multiple scans of a gel dosimeter (in this case 360 scans) can be used to reconstruct a zero-scan image. This zero-scan image has a similar precision to an image obtained by averaging the CT images, without the additional dose delivered by the repeated CT scans. In this investigation the interplay effects have been studied for single and multiple field fractionated EDW treatments using experimental and Monte Carlo methods. For the Monte Carlo methods, the DYNJAWS component module of the BEAMnrc code has been validated, automated and used to study the interplay for multiple-field EDW treatments. The zero-scan method, a new gel dosimetry readout technique, has been developed for reading gel images with x-ray CT without losing precision or accuracy.
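The 3%/3 mm gamma criterion used for the film analysis above can be sketched in one dimension: for each reference point, gamma is the minimum combined dose-difference/distance deviation over the compared profile, and a point passes when gamma ≤ 1. The profiles, grid spacing and global dose normalisation below are synthetic illustrations, not the study's data.

```python
import numpy as np

# Minimal 1-D global gamma comparison at 3%/3 mm (synthetic profiles).
def gamma_pass_rate(ref, meas, spacing_mm, dose_tol=0.03, dist_tol_mm=3.0):
    x = np.arange(ref.size) * spacing_mm
    norm = ref.max()                   # global dose normalisation
    passes = []
    for i, d_ref in enumerate(ref):
        dose_term = ((meas - d_ref) / (dose_tol * norm)) ** 2
        dist_term = ((x - x[i]) / dist_tol_mm) ** 2
        passes.append(np.sqrt(np.min(dose_term + dist_term)) <= 1.0)
    return float(np.mean(passes))

x = np.linspace(0, 100, 101)                      # position (mm), 1 mm grid
ref = np.exp(-((x - 50) / 20) ** 2)               # reference wedge-like profile
meas_good = ref * 1.01                            # 1% dose scaling: passes
meas_shift = np.exp(-((x - 58) / 20) ** 2)        # 8 mm shift: largely fails

print(gamma_pass_rate(ref, meas_good, 1.0))
print(gamma_pass_rate(ref, meas_shift, 1.0))
```

A pure 1% rescale stays inside the 3% dose tolerance everywhere, while an 8 mm shift exceeds the 3 mm distance tolerance across the steep part of the profile, which is how a 49% pass rate can arise from motion alone.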

Relevance: 10.00%

Abstract:

This thesis develops a detailed conceptual design method and a system software architecture, defined with a parametric and generative evolutionary design system, to support an integrated, interdisciplinary building design approach. The research recognises the need to shift design effort toward the earliest phases of the design process to support crucial design decisions that have a substantial cost implication for the overall project budget. The overall motivation of the research is to improve the quality of designs produced at the author's employer, the General Directorate of Major Works (GDMW) of the Saudi Arabian Armed Forces. GDMW produces many buildings that have standard requirements, across a wide range of environmental and social circumstances, so a rapid means of customising designs for local circumstances would have significant benefits. The research considers the use of evolutionary genetic algorithms in the design process and their ability to generate and assess a wider range of potential design solutions than a human could manage. This wider-ranging assessment, during the early stages of the design process, means that the generated solutions will be more appropriate for the defined design problem. The research work proposes a design method and system that promote a collaborative relationship between human creativity and computer capability. The tectonic design approach is adopted as a process-oriented approach that values the process of design as much as the product. The aim is to connect the evolutionary system to performance assessment applications, which are used as prioritised fitness functions. This produces design solutions that respond to their environmental and functional requirements. This integrated, interdisciplinary approach will produce solutions through a design process that considers and balances the requirements of all aspects of the design.
Since this thesis covers a wide area of research material, a 'methodological pluralism' approach was used, incorporating both prescriptive and descriptive research methods. Multiple models of research were combined, and the overall research was undertaken in three main stages: conceptualisation, development and evaluation. The first two stages lay the foundations for the specification of the proposed system, where key aspects of the system that have not previously been proven in the literature were implemented to test the feasibility of the system. As a result of combining the existing knowledge in the area with the newly verified key aspects of the proposed system, this research can form the basis for a future software development project. The evaluation stage, which includes building a prototype system to test and evaluate the system performance against the criteria defined in the earlier stage, is not within the scope of this thesis. The research results in a conceptual design method and a proposed system software architecture. The proposed system is called the 'Hierarchical Evolutionary Algorithmic Design (HEAD) System'. The HEAD system has been shown to be feasible through an initial illustrative paper-based simulation. The HEAD system consists of two main components: the 'Design Schema' and the 'Synthesis Algorithms'. The HEAD system reflects the major research contribution in the way it is conceptualised, while secondary contributions are achieved within the system components. The design schema provides constraints on the generation of designs, thus enabling the designer to create a wide range of potential designs that can then be analysed for desirable characteristics. The design schema supports the digital representation of the designers' human creativity in a dynamic design framework that can be encoded and then executed through the use of evolutionary genetic algorithms.
The design schema incorporates 2D and 3D geometry and graph theory for space layout planning and building formation, using the Lowest Common Design Denominator (LCDD) of a parameterised 2D module and a 3D structural module. This provides a bridge between the standard adjacency requirements and the evolutionary system. The use of graphs as input to the evolutionary algorithm supports the introduction of constraints in a way that is not supported by standard evolutionary techniques. The process of design synthesis is guided by a higher-level description of the building that expresses geometrical constraints. The Synthesis Algorithms component analyses designs at four levels: 'Room', 'Layout', 'Building' and 'Optimisation'. At each level, multiple fitness functions are embedded in the genetic algorithm to target the specific requirements of the relevant decomposed part of the design problem. Decomposing the design problem, so that the design requirements of each level are dealt with separately and then reassembled in a bottom-up approach, reduces the generation of non-viable solutions by constraining the options available at the next higher level. The iterative approach, exploring the range of design solutions through modification of the design schema as the understanding of the design problem improves, assists in identifying conflicts in the design requirements. Additionally, the hierarchical set-up allows the embedding of multiple fitness functions in the genetic algorithm, each relevant to a specific level. This supports an integrated multi-level, multi-disciplinary approach. The HEAD system promotes a collaborative relationship between human creativity and computer capability. The design schema component, as the input to the procedural algorithms, enables the encoding of certain aspects of the designer's subjective creativity.
By focusing on finding solutions to the relevant sub-problems at the appropriate levels of detail, the hierarchical nature of the system assists in the design decision-making process.
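A toy genetic algorithm in the spirit of the Synthesis Algorithms can be sketched at the 'Layout' level: individuals are room orderings and fitness rewards satisfied adjacency constraints. The rooms, constraints and GA settings are invented for the sketch; the HEAD system layers such searches across its Room/Layout/Building/Optimisation levels with level-specific fitness functions.

```python
import random

random.seed(3)

# Invented layout problem: order six rooms so that desired pairs end up
# adjacent. Fitness counts satisfied adjacency constraints (max 3 here).
ROOMS = ["entry", "living", "kitchen", "bed1", "bed2", "bath"]
WANT_ADJACENT = [("living", "kitchen"), ("bed1", "bath"), ("entry", "living")]

def fitness(order):
    return sum(1 for a, b in WANT_ADJACENT
               if abs(order.index(a) - order.index(b)) == 1)

def crossover(p1, p2):
    cut = random.randrange(1, len(p1))
    head = p1[:cut]
    return head + [r for r in p2 if r not in head]   # keeps a valid permutation

def mutate(order):
    i, j = random.sample(range(len(order)), 2)
    order[i], order[j] = order[j], order[i]          # swap two rooms

pop = [random.sample(ROOMS, len(ROOMS)) for _ in range(30)]
for _ in range(40):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                               # elitist selection
    pop = parents + [crossover(*random.sample(parents, 2)) for _ in range(20)]
    for ind in pop[10:]:
        if random.random() < 0.3:
            mutate(ind)

best = max(pop, key=fitness)
print(best, fitness(best))
```

Constraining which orderings can be generated at all (as the design schema does with graphs) would shrink this search space before the GA even starts, which is the point of the hierarchical decomposition above.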

Relevance: 10.00%

Abstract:

This work focuses on the development of a stand-alone gas nanosensor node, powered by solar energy, to track the concentration of pollutant gases such as NO2, N2O and NH3. Gas sensor networks have been widely developed over recent years, but the rise of nanotechnology is allowing the creation of a new range of gas sensors [1] with higher performance, smaller size and an inexpensive manufacturing process. This work has created a gas nanosensor node prototype to evaluate the future field performance of this new generation of sensors. The sensor node has four main parts: (i) solar cells; (ii) control electronics; (iii) gas sensor and sensor board interface [2-4]; and (iv) data transmission. The station is remotely monitored through a wired (Ethernet cable) or wireless connection (radio transmitter) [5, 6] in order to evaluate, in real time, the performance of the solar cells and sensor node under different weather conditions. The energy source of the node is a module of polycrystalline silicon solar cells with 410 cm² of active surface. The prototype is equipped with a Resistance-To-Period circuit [2-4] to measure the wide range of resistances (kΩ to GΩ) from the sensor in a simple and accurate way. The system shows high performance in (i) managing the energy from the solar panel, (ii) powering the system load and (iii) recharging the battery. The results show that the prototype is suitable to work with any kind of resistive gas nanosensor and provides useful data for future nanosensor networks.
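The resistance-to-period interface maps sensor resistance to an oscillation period, so a single timer reading can cover the kΩ-GΩ range. A hypothetical first-order calibration T = k·R·C is assumed in the sketch below; the circuit constant, capacitance and the transfer function itself are illustrative stand-ins for the cited sensor-interface design.

```python
# Hypothetical calibration for a resistance-to-period readout: T = K * R * C,
# so R = T / (K * C). K and C are assumed values, not the prototype's.
K = 2.2            # dimensionless circuit constant (assumed)
C = 100e-12        # timing capacitance in farads (assumed)

def resistance_from_period(period_s):
    """Recover sensor resistance (ohms) from a measured period (seconds)."""
    return period_s / (K * C)

# With these assumed values, a 22 us period maps to 100 kOhm and a 22 s
# period to 100 GOhm, illustrating the wide dynamic range of the approach.
print(resistance_from_period(22e-6))   # 1e5 ohms
print(resistance_from_period(22.0))    # 1e11 ohms
```

Measuring a period instead of a voltage is what sidesteps the need for multiple amplifier gain ranges across six decades of resistance.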


Resumo:

Mutations in multiple oncogenes including KRAS, CTNNB1, PIK3CA and FGFR2 have been identified in endometrial cancer. The aim of this study was to provide insight into the clinicopathological features associated with patterns of mutation in these genes, a necessary step in planning targeted therapies for endometrial cancer. 466 endometrioid endometrial tumors were tested for mutations in FGFR2, KRAS, CTNNB1, and PIK3CA. The relationships between mutation status, tumor microsatellite instability (MSI) and clinicopathological features including overall survival (OS) and disease-free survival (DFS) were evaluated using Kaplan-Meier survival analysis and Cox proportional hazard models. Mutations were identified in FGFR2 (48/466); KRAS (87/464); CTNNB1 (88/454) and PIK3CA (104/464). KRAS and FGFR2 mutations were significantly more common, and CTNNB1 mutations less common, in MSI positive tumors. KRAS and FGFR2 occurred in a near mutually exclusive pattern (p = 0.05) and, surprisingly, mutations in KRAS and CTNNB1 also occurred in a near mutually exclusive pattern (p = 0.0002). Multivariate analysis revealed that mutation in KRAS and FGFR2 showed a trend (p = 0.06) towards longer and shorter DFS, respectively. In the 386 patients with early stage disease (stage I and II), FGFR2 mutation was significantly associated with shorter DFS (HR = 3.24; 95% confidence interval, CI, 1.35-7.77; p = 0.008) and OS (HR = 2.00; 95% CI 1.09-3.65; p = 0.025) and KRAS was associated with longer DFS (HR = 0.23; 95% CI 0.05-0.97; p = 0.045). In conclusion, although KRAS and FGFR2 mutations share similar activation of the MAPK pathway, our data suggest very different roles in tumor biology. This has implications for the implementation of anti-FGFR or anti-MEK biologic therapies.
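A near mutually exclusive pattern of the kind reported above (e.g. KRAS vs. CTNNB1, p = 0.0002) is typically assessed with a one-sided Fisher's exact test on the 2x2 co-mutation table. The sketch below shows that calculation from first principles; the counts used in the example are hypothetical, since the paper's per-tumor genotype table is not reproduced here, and the abstract does not state which exact test was used.

```python
from math import comb

def fisher_exact_less(a, b, c, d):
    """One-sided Fisher's exact test: P(overlap <= a) for the table
    [[a, b], [c, d]], where a = tumors mutated in BOTH genes.
    A small p-value means fewer co-mutations than expected by chance,
    i.e. support for mutual exclusivity."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)
    # Sum hypergeometric probabilities over all overlaps k <= a.
    return sum(comb(row1, k) * comb(n - row1, col1 - k)
               for k in range(max(0, row1 + col1 - n), a + 1)) / denom

# Hypothetical example with the abstract's marginals: 454 tumors,
# gene X mutated in 87, gene Y in 88, and (assumed) only 2 in both.
both, x_only, y_only = 2, 85, 86
neither = 454 - both - x_only - y_only
p = fisher_exact_less(both, x_only, y_only, neither)
print(f"one-sided p for mutual exclusivity: {p:.2e}")
```

With these marginals the expected overlap under independence is about 87 x 88 / 454 ≈ 17 tumors, so observing only a handful of co-mutated cases yields a very small p-value, the qualitative pattern the study reports.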


Resumo:

This sociological introduction provides a much-needed textbook for an increasingly popular area of study. Written by a team of authors with a broad range of teaching and individual expertise, it covers almost every module offered in UK criminology courses and will be valuable to students of criminology worldwide. It covers:
- key traditions in criminology, their critical assessment and more recent developments;
- new ways of thinking about crime and control, including crime and emotions, and drugs and alcohol from a public health perspective;
- different dimensions of the problem of crime and misconduct, including crime and sexuality, crimes against the environment, crime and human rights and organizational deviance;
- key debates in criminological theory;
- the criminal justice system;
- new areas such as the globalization of crime, and crime in cyberspace.
Specially designed to be user-friendly, each chapter contains boxed material on current controversies, key thinkers and examples of crime and criminal justice around the world, with statistical tables, maps, summaries, critical-thinking questions, annotated references and a glossary of key terms, as well as further reading sections and additional resource information as weblinks.


Resumo:

Aspect orientation is an important approach to addressing the complexity of cross-cutting concerns in information systems. It encapsulates these concerns separately and composes them with the main module when needed. Although different works show how this separation should be performed in process models, their composition remains an open area. In this paper, we define the semantics of a service that enables this composition. The result can also be used as a blueprint for implementing such a service to support aspect orientation in the Business Process Management area.
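The composition step can be illustrated with a deliberately simplified weaving function. This is a generic sketch of aspect weaving, not the paper's formal service semantics: it assumes a process is an ordered list of task names, and an aspect is a pointcut predicate plus lists of "before" and "after" advice tasks; all names below are invented for the example.

```python
def weave(process, aspects):
    """Return a new task list with each aspect's advice composed around
    every task its pointcut matches."""
    woven = []
    for task in process:
        for aspect in aspects:
            if aspect["pointcut"](task):
                woven.extend(aspect.get("before", []))
        woven.append(task)
        for aspect in aspects:
            if aspect["pointcut"](task):
                woven.extend(aspect.get("after", []))
    return woven

# A cross-cutting audit concern, modelled separately from the main
# process and composed with it only when needed.
audit = {"pointcut": lambda t: t.startswith("approve"),
         "before": ["log_request"], "after": ["log_decision"]}

claims_process = ["receive_claim", "approve_payment", "notify_customer"]
print(weave(claims_process, [audit]))
# -> ['receive_claim', 'log_request', 'approve_payment',
#     'log_decision', 'notify_customer']
```

The point of the sketch is that the main process model stays free of the audit logic; the weaving service alone decides where the concern is inserted, which is the separation the paper argues for.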