912 results for Work Systems
Abstract:
Colorectal cancer is a heterogeneous disease that manifests through diverse clinical scenarios. For many years, our knowledge of the variability of colorectal tumors was limited to histopathological analysis, from which generic classifications associated with different clinical expectations were derived. However, we are now beginning to understand that beneath the intense pathological and clinical variability of these tumors lies strong genetic and biological heterogeneity. Thus, with the increasing information available on inter-tumor and intra-tumor heterogeneity, the classical pathological approach is being displaced in favor of novel molecular classifications. In the present article, we summarize the most relevant proposed molecular classifications obtained from the analysis of colorectal tumors using powerful high-throughput techniques and devices. We also discuss the role that cancer systems biology may play in the integration and interpretation of the large amount of data generated, and the challenges to be addressed in the future development of precision oncology. In addition, we review the current state of implementation of these novel tools in the pathology laboratory and in clinical practice.
Abstract:
In many European countries, image quality for digital x-ray systems used in screening mammography is currently specified using a threshold-detail detectability method. This is a two-part study that proposes an alternative method based on calculated detectability for a model observer; the first part of the work presents a characterization of the systems. Eleven digital mammography systems were included in the study: four computed radiography (CR) systems and a group of seven digital radiography (DR) detectors, composed of three amorphous selenium-based detectors, three caesium iodide scintillator systems and a silicon wafer-based photon-counting system. The technical parameters assessed included the system response curve, detector uniformity error, pre-sampling modulation transfer function (MTF), normalized noise power spectrum (NNPS) and detective quantum efficiency (DQE). The approximate quantum-noise-limited exposure range was examined using a separation of noise sources based upon standard deviation. Noise separation showed that electronic noise was the dominant noise at low detector air kerma for three systems; the remaining systems showed quantum-noise-limited behaviour between 12.5 and 380 µGy. Greater variation in detector MTF was found for the DR group than for the CR systems; MTF at 5 mm⁻¹ varied from 0.08 to 0.23 for the CR detectors against a range of 0.16-0.64 for the DR units. The needle CR detector had a higher MTF, lower NNPS and higher DQE at 5 mm⁻¹ than the powder CR phosphors. DQE at 5 mm⁻¹ ranged from 0.02 to 0.20 for the CR systems, while DQE at 5 mm⁻¹ for the DR group ranged from 0.04 to 0.41, indicating higher DQE for the DR detectors and the needle CR system than for the powder CR phosphor systems. The technical evaluation section of the study showed that the digital mammography systems were well set up and exhibited typical performance for the detector technology employed in the respective systems.
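The metrics above combine in the standard DQE relation; as a minimal sketch, DQE at each frequency can be computed from the MTF and NNPS. Here `q_per_uGy_mm2` (photon fluence per unit air kerma) is an illustrative placeholder, not a value from this study; the real figure depends on the x-ray spectrum used.

```python
import numpy as np

# DQE(f) = MTF(f)^2 / (phi * NNPS(f)), where phi is the photon fluence
# at the detector (photons/mm^2), estimated as q * air kerma.
# q_per_uGy_mm2 is an ILLUSTRATIVE value, not one from this study;
# standards tabulate it per beam quality.
def dqe(mtf, nnps, air_kerma_uGy, q_per_uGy_mm2=5000.0):
    phi = q_per_uGy_mm2 * air_kerma_uGy
    return np.asarray(mtf) ** 2 / (phi * np.asarray(nnps))
```

An ideal detector (MTF of 1 and NNPS equal to 1/phi) yields DQE = 1; real systems fall off with frequency, as the 5 mm⁻¹ figures quoted above show.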
Abstract:
Dialogic learning and interactive groups have proved to be a useful methodological approach applied in educational situations for lifelong adult learners. The principles of this approach stress the importance of dialogue and equal participation, also when designing the training activities. This paper adopts these principles as the basis for a configurable template that can be integrated in runtime systems. The template is formulated as a meta-UoL which can be interpreted by IMS Learning Design players. This template serves as a guide to flexibly select and edit the activities at runtime (on the fly). The meta-UoL has been used successfully by a practitioner to create a real-life example, with positive and encouraging results.
Abstract:
A contemporary perspective on the tradeoff between transmit antenna diversity and spatial multiplexing is provided. It is argued that, in the context of most modern wireless systems and for the operating points of interest, transmission techniques that utilize all available spatial degrees of freedom for multiplexing outperform techniques that explicitly sacrifice spatial multiplexing for diversity. In the context of such systems, therefore, there essentially is no decision to be made between transmit antenna diversity and spatial multiplexing in MIMO communication. Reaching this conclusion, however, requires that the channel and some key system features be adequately modeled and that suitable performance metrics be adopted; failure to do so may bring about starkly different conclusions. As a specific example, this contrast is illustrated using the 3GPP Long-Term Evolution system design.
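The multiplexing gain the abstract refers to can be illustrated numerically. A minimal Monte Carlo sketch of MIMO ergodic capacity, assuming an i.i.d. Rayleigh channel and equal power allocation (assumptions of this illustration, not of the paper's system model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ergodic capacity of an i.i.d. Rayleigh MIMO channel with equal power
# split across nt transmit antennas: E[log2 det(I + SNR/nt * H H^H)].
def ergodic_capacity(nt, nr, snr_db, trials=2000):
    snr = 10.0 ** (snr_db / 10.0)
    total = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((nr, nt))
             + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2.0)
        total += np.log2(np.linalg.det(
            np.eye(nr) + (snr / nt) * H @ H.conj().T).real)
    return total / trials
```

At moderate-to-high SNR the capacity grows roughly linearly with min(nt, nr), which is exactly the spatial degree-of-freedom advantage that multiplexing-oriented transmission exploits.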
Abstract:
It is well known that multiple-input multiple-output (MIMO) techniques can bring numerous benefits, such as higher spectral efficiency, to point-to-point wireless links. More recently, there has been interest in extending MIMO concepts to multiuser wireless systems. Our focus in this paper is on network MIMO, a family of techniques whereby each end user in a wireless access network is served through several access points within its range of influence. By tightly coordinating the transmission and reception of signals at multiple access points, network MIMO can transcend the limits on spectral efficiency imposed by cochannel interference. Taking prior information-theoretic analyses of network MIMO to the next level, we quantify the spectral efficiency gains obtainable under realistic propagation and operational conditions in a typical indoor deployment. Our study relies on detailed simulations and, for specificity, is conducted largely within the physical-layer framework of the IEEE 802.16e Mobile WiMAX system. Furthermore, to facilitate the coordination between access points, we assume that a high-capacity local area network, such as Gigabit Ethernet, connects all the access points. Our results confirm that network MIMO stands to provide a multiple-fold increase in spectral efficiency under these conditions.
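The mechanism by which coordination transcends cochannel interference can be caricatured in one line: joint transmission turns a neighboring access point's signal from interference into useful energy. A stylized two-cell sketch with scalar powers (a toy illustration, not the paper's 802.16e simulation framework):

```python
import numpy as np

# Stylized two-cell comparison in linear power units. Without
# coordination, the second access point's power is interference;
# with joint (network-MIMO-style) transmission it adds to the signal.
def spectral_efficiency(sig, interf, noise):
    uncoordinated = np.log2(1.0 + sig / (interf + noise))
    coordinated = np.log2(1.0 + (sig + interf) / noise)
    return uncoordinated, coordinated
```

In the interference-limited regime (interference well above noise), the coordinated rate keeps growing with transmit power while the uncoordinated one saturates, which is why the gains reported are multiple-fold rather than marginal.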
Abstract:
The attached plan builds upon work done over the last decade. The first plan developed after the creation of the Division of Criminal and Juvenile Justice Planning in 1986 was issued in 1990 and annually updated through 1994. Since 1992, the CJJPAC has been required to coordinate its planning activities with those of the Iowa Juvenile Justice Advisory Council (JJAC). In 1995, these two councils developed a new plan consisting of a set of long-range justice system goals to assist policy makers and justice system practitioners as they plan and operate the justice system through the next twenty years. The statutory mandate for such long-range planning required the identification of goals specific enough to provide guidance, but broad enough to be of relevance over a long period of time. The long-range goals adopted by these councils in 1995 covered a wide variety of topics and offered a framework within which current practices could be defined and assessed. Collectively, these long-range goals were meant to provide a single source of direction to the complex assortment of practitioners and policymakers whose individual concerns and decisions collectively define the nature and effectiveness of Iowa's justice system. The twenty-year goals established in 1995 were reviewed by the councils in 2000 to assess their continued relevance. It was determined that, with a few revisions, the goals established in 1995 should be restated in 2000 with a renewed emphasis on their long-range status. This plan builds upon those issued in 1995 and 2000, continuing much of the emphasis of those plans, with some new directions charted as appropriate.
Abstract:
In this paper, we introduce a pilot-aided multipath channel estimator for Multiple-Input Multiple-Output (MIMO) Orthogonal Frequency Division Multiplexing (OFDM) systems. Typical estimation algorithms assume the number of multipath components and delays to be known and constant, while their amplitudes may vary in time. In this work, we focus on the more realistic assumption that the number of channel taps is also unknown and time-varying. The estimation problem arising from this assumption is solved using Random Set Theory (RST), which is a probability theory of finite sets. Due to the lack of a closed form of the optimal filter, a Rao-Blackwellized Particle Filter (RBPF) implementation of the channel estimator is derived. Simulation results demonstrate the estimator's effectiveness.
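For context, the conventional baseline such estimators improve upon can be sketched as a pilot-aided least-squares estimate with interpolation across subcarriers. This is a simplified stand-in for illustration only; the paper's RST/RBPF estimator additionally tracks the unknown, time-varying number of taps.

```python
import numpy as np

# Conventional pilot-aided LS channel estimate for one OFDM symbol:
# divide received pilots by the known transmitted pilots, then
# interpolate the frequency response over all n_fft subcarriers
# (real and imaginary parts interpolated separately).
def ls_channel_estimate(rx_pilots, tx_pilots, pilot_idx, n_fft):
    h_pilot = np.asarray(rx_pilots) / np.asarray(tx_pilots)
    k = np.arange(n_fft)
    return (np.interp(k, pilot_idx, h_pilot.real)
            + 1j * np.interp(k, pilot_idx, h_pilot.imag))
```

This baseline implicitly assumes the channel is fully captured at the pilot positions; when the tap structure itself changes over time, as the paper assumes, a filter that tracks the tap set (the RBPF) is needed on top of amplitude estimation.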
Abstract:
Multiple-input multiple-output (MIMO) techniques have become an essential part of broadband wireless communications systems. For example, the recently developed IEEE 802.16e specifications for broadband wireless access include three MIMO profiles employing 2×2 space-time codes (STCs), and two of these MIMO schemes are mandatory on the downlink of Mobile WiMAX systems. One of these has full rate, and the other has full diversity, but neither has both of the desired features. The third profile, namely Matrix C, which is not mandatory, is both a full-rate and a full-diversity code, but it has high decoder complexity. Recently, attention has turned to the decoder-complexity issue and, including this in the design criteria, several full-rate STCs were proposed as alternatives to Matrix C. In this paper, we review these different alternatives and compare them to Matrix C in terms of performance and the corresponding receiver complexities.
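The full-diversity (but not full-rate) profile mentioned above is the classic Alamouti scheme. A minimal sketch of its encoding and linear combining for a single receive antenna, assuming the channel gains h1, h2 stay constant over the two symbol periods:

```python
import numpy as np

# Alamouti space-time block code: the antenna pair sends (s1, s2) in
# slot 1 and (-s2*, s1*) in slot 2. The receiver combines the two
# observations to recover each symbol with gain |h1|^2 + |h2|^2,
# i.e. full (second-order) transmit diversity, at rate one symbol
# per channel use (not full rate for 2x2 MIMO).
def alamouti_encode(s1, s2):
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_combine(r1, r2, h1, h2):
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    gain = abs(h1) ** 2 + abs(h2) ** 2
    return s1_hat / gain, s2_hat / gain
```

The combining is linear and symbol-by-symbol, which is precisely the low decoder complexity that full-rate codes such as Matrix C give up.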
Abstract:
A contemporary perspective on the tradeoff between transmit antenna diversity and spatial multiplexing is provided. It is argued that, in the context of modern cellular systems and for the operating points of interest, transmission techniques that utilize all available spatial degrees of freedom for multiplexing outperform techniques that explicitly sacrifice spatial multiplexing for diversity. Reaching this conclusion, however, requires that the channel and some key system features be adequately modeled; failure to do so may bring about starkly different conclusions. As a specific example, this contrast is illustrated using the 3GPP Long-Term Evolution system design.
Abstract:
The purpose of this paper is to examine (1) some of the models commonly used to represent fading, and (2) the information-theoretic metrics most commonly used to evaluate performance over those models. We raise the question of whether these models and metrics remain adequate in light of the advances that wireless systems have undergone over the last two decades. Weaknesses are pointed out, and ideas on possible fixes are put forth.
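How much the choice of metric matters can be made concrete with a small sketch comparing two standard information-theoretic metrics over the same fading model (single-antenna Rayleigh fading, an illustrative choice): ergodic capacity, which averages over fading states, versus 1%-outage capacity, which reports the rate guaranteed in all but 1% of states.

```python
import numpy as np

rng = np.random.default_rng(0)

# For a Rayleigh-fading scalar channel the power gain |h|^2 is
# exponentially distributed. Ergodic capacity = E[log2(1 + SNR*g)];
# outage capacity = the rate achieved in all but `outage` of states.
def ergodic_and_outage(snr_db, trials=50000, outage=0.01):
    snr = 10.0 ** (snr_db / 10.0)
    g = rng.exponential(size=trials)
    rates = np.log2(1.0 + snr * g)
    return rates.mean(), np.quantile(rates, outage)
```

The two numbers can differ by an order of magnitude at the same SNR, so a system designed against the wrong metric for its traffic (delay-tolerant versus delay-constrained) will be badly mischaracterized, which is the kind of mismatch the paper scrutinizes.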
Abstract:
The double spin-echo point-resolved spectroscopy sequence (PRESS) is a widely used method and standard in clinical MR spectroscopy. The existence of important J-modulations at constant echo times, depending on the temporal delays between the rf pulses, has been demonstrated recently for strongly coupled spin systems and was exploited for difference editing, removing singlets from the spectrum (strong-coupling PRESS, S-PRESS). A drawback of this method for in vivo applications is that the large signal modulations needed for difference editing occur only at relatively long echo times. In this work we demonstrate that, by simply adding a third refocusing pulse (3S-PRESS), difference editing becomes possible at substantially shorter echo times while, as applied to citrate, more favorable lineshapes can be obtained. For the example of an AB system, an analytical description of the MR signal obtained with this triple-refocusing sequence (3S-PRESS) is provided.
Abstract:
"Sitting between your past and your future doesn't mean you are in the present." (Dakota Skye)
Complex systems science is an interdisciplinary field grouping under the same umbrella dynamical phenomena from the social, natural and mathematical sciences. The emergence of a higher-order organization or behavior, transcending that expected of the linear addition of the parts, is a key factor shared by all these systems. Most complex systems can be modeled as networks that represent the interactions amongst the system's components. In addition to the actual nature of the parts' interactions, the intrinsic topological structure of the underlying network is believed to play a crucial role in the remarkable emergent behaviors exhibited by the systems. Moreover, the topology is also a key factor in explaining the extraordinary flexibility and resilience to perturbations when applied to transmission and diffusion phenomena. In this work, we study the effect of different network structures on the performance and on the fault tolerance of systems in two different contexts. In the first part, we study cellular automata, which are a simple paradigm for distributed computation. Cellular automata are made of basic Boolean computational units, the cells, relying on simple rules and information from the surrounding cells to perform a global task. The limited visibility of the cells can be modeled as a network, where interactions amongst cells are governed by an underlying structure, usually a regular one. In order to increase the performance of cellular automata, we chose to change their topology. We applied computational principles inspired by Darwinian evolution, called evolutionary algorithms, to alter the system's topological structure starting from either a regular or a random one.
The outcome is remarkable, as the resulting topologies share properties of both regular and random networks, and display similarities with the Watts-Strogatz small-world networks found in social systems. Moreover, the performance and tolerance to probabilistic faults of our small-world-like cellular automata surpass those of regular ones. In the second part, we use the context of biological genetic regulatory networks and, in particular, Kauffman's random Boolean networks model. In some ways, this model is close to cellular automata, although it is not expected to perform any task. Instead, it simulates the time-evolution of genetic regulation within living organisms under strict conditions. The original model, though very attractive in its simplicity, suffered from important shortcomings unveiled by recent advances in genetics and biology. We propose to use these new discoveries to improve the original model. Firstly, we have used artificial topologies believed to be closer to those of gene regulatory networks. We have also studied actual biological organisms, and used parts of their genetic regulatory networks in our models. Secondly, we have addressed the improbable full synchronicity of the events taking place in Boolean networks and proposed a more biologically plausible cascading update scheme. Finally, we tackled the actual Boolean functions of the model, i.e. the specifics of how genes activate according to the activity of upstream genes, and presented a new update function that takes into account the actual promoting and repressing effects of one gene on another. Our improved models demonstrate the expected, biologically sound behavior of previous GRN models, yet with superior resistance to perturbations. We believe they are one step closer to the biological reality.
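The classic Kauffman model that serves as the starting point here can be sketched in a few lines: n genes, each reading k random inputs through a random Boolean table, updated synchronously until the trajectory falls onto an attractor. This is a toy version of the baseline (fully synchronous) model, i.e. the very scheme the work argues is biologically improbable, not its improved cascading variant.

```python
import random

# Toy Kauffman random Boolean network with SYNCHRONOUS update.
# Each of n genes reads k randomly chosen inputs through a random
# Boolean lookup table.
def make_rbn(n, k, seed=1):
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    new = []
    for inp, tab in zip(inputs, tables):
        idx = 0
        for j in inp:  # pack input bits into a table index
            idx = (idx << 1) | state[j]
        new.append(tab[idx])
    return tuple(new)

# Iterate until a state repeats; the gap between visits is the
# attractor (limit cycle) length. Guaranteed to terminate when
# max_steps exceeds the 2^n state space.
def attractor_length(state, inputs, tables, max_steps=2000):
    seen = {}
    for t in range(max_steps):
        if state in seen:
            return t - seen[state]
        seen[state] = t
        state = step(state, inputs, tables)
    return None
```

Because the dynamics are deterministic over a finite state space, every trajectory ends on a limit cycle; the statistics of those cycles (number, length, stability under perturbation) are what the improved topologies and update schemes described above are evaluated on.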
Abstract:
We present a new unifying framework for investigating throughput-WIP (Work-in-Process) optimal control problems in queueing systems, based on reformulating them as linear programming (LP) problems with special structure: We show that if a throughput-WIP performance pair in a stochastic system satisfies the Threshold Property we introduce in this paper, then we can reformulate the problem of optimizing a linear objective of throughput-WIP performance as a (semi-infinite) LP problem over a polygon with special structure (a threshold polygon). The strong structural properties of such polygons explain the optimality of threshold policies for optimizing linear performance objectives: their vertices correspond to the performance pairs of threshold policies. We analyze in this framework the versatile input-output queueing intensity control model introduced by Chen and Yao (1990), obtaining a variety of new results, including (a) an exact reformulation of the control problem as an LP problem over a threshold polygon; (b) an analytical characterization of the Min WIP function (giving the minimum WIP level required to attain a target throughput level); (c) an LP Value Decomposition Theorem that relates the objective value under an arbitrary policy with that of a given threshold policy (thus revealing the LP interpretation of Chen and Yao's optimality conditions); (d) diminishing returns and invariance properties of throughput-WIP performance, which underlie threshold optimality; (e) a unified treatment of the time-discounted and time-average cases.
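The throughput-WIP pairs a threshold policy traces out can be seen in closed form in the simplest case: an M/M/1 queue where admission is blocked whenever K jobs are present (i.e. M/M/1/K). A sketch using the standard stationary distribution; this is a textbook special case for intuition, not Chen and Yao's general intensity control model.

```python
# Throughput and mean WIP of an M/M/1/K queue (arrivals blocked when
# K jobs are present), from the standard truncated-geometric
# stationary distribution p_n proportional to (lam/mu)^n.
def mm1k_throughput_wip(lam, mu, K):
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        p = [1.0 / (K + 1)] * (K + 1)
    else:
        norm = (1.0 - rho ** (K + 1)) / (1.0 - rho)
        p = [rho ** n / norm for n in range(K + 1)]
    throughput = lam * (1.0 - p[K])   # accepted arrival rate
    wip = sum(n * pn for n, pn in enumerate(p))
    return throughput, wip
```

Sweeping K generates exactly the vertex pattern the abstract describes: each threshold yields one (throughput, WIP) pair, throughput rises with diminishing returns while WIP keeps growing, so a linear objective in the two is optimized at a threshold.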
Abstract:
Species of thrips (Insecta, Thysanoptera) in two strawberry production systems in Rio Grande do Sul, Brazil. Thrips are tiny insects responsible for the reduction of strawberry fruit quality. The work aimed to record and quantify the thysanopterofauna present in two strawberry production systems, low tunnel and semi-hydroponic. Leaves, flowers and fruits were collected weekly, from July 2005 to December 2006, in the Caxias do Sul and Bom Princípio municipalities, RS. A total of 664 individuals were collected, representing two families, four genera and 10 species: Frankliniella occidentalis (Pergande, 1895), F. schultzei (Trybom, 1910), F. rodeos Moulton, 1933, F. simplex (Priesner, 1924), F. williamsi (Hood, 1915), F. gemina (Bagnall, 1919), Frankliniella sp., Thrips tabaci (Lindeman, 1888), Caliothrips fasciatus (Pergande, 1895) from Thripidae, and Heterothrips sp. from Heterothripidae. Frankliniella occidentalis represented 89.7% of the samples, with 95.8% of the specimens collected in flowers, 3.9% in fruits and 0.8% in leaves. The results show that flowers are the most important food resource for these insects on strawberry plants. Frankliniella rodeos, F. simplex, F. williamsi, C. fasciatus, and Heterothrips sp. are new records on strawberry for Brazil.
Abstract:
Given the adverse impact of image noise on the perception of important clinical details in digital mammography, routine quality control measurements should include an evaluation of noise. The European Guidelines, for example, employ a second-order polynomial fit of pixel variance as a function of detector air kerma (DAK) to decompose noise into quantum, electronic and fixed pattern (FP) components and assess the DAK range where quantum noise dominates. This work examines the robustness of the polynomial method against an explicit noise decomposition method. The two methods were applied to variance and noise power spectrum (NPS) data from six digital mammography units. Twenty homogeneously exposed images were acquired with PMMA blocks for target DAKs ranging from 6.25 to 1600 µGy. Both methods were explored for the effects of data weighting and squared fit coefficients during the curve fitting, the influence of the additional filter material (2 mm Al versus 40 mm PMMA) and noise de-trending. Finally, spatial stationarity of noise was assessed. Data weighting improved noise model fitting over large DAK ranges, especially at low detector exposures. The polynomial and explicit decompositions generally agreed for quantum and electronic noise but the FP noise fraction was consistently underestimated by the polynomial method. Noise decomposition as a function of position in the image showed limited noise stationarity, especially for FP noise; thus the position of the region of interest (ROI) used for noise decomposition may influence fractional noise composition. The ROI area and position used in the Guidelines offer an acceptable estimation of noise components. While there are limitations to the polynomial model, when used with care and with appropriate data weighting, the method offers a simple and robust means of examining the detector noise components as a function of detector exposure.
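The polynomial decomposition being examined can be sketched directly: pixel variance is fit as a quadratic in DAK, and the constant, linear, and quadratic coefficients correspond to the electronic, quantum, and fixed-pattern components respectively. A minimal unweighted version (the text notes that data weighting improves fitting over wide DAK ranges):

```python
import numpy as np

# Fit variance(K) = c0 + c1*K + c2*K^2 and read off the three noise
# components: electronic (c0, exposure-independent), quantum (c1*K,
# linear in DAK), fixed-pattern (c2*K^2, quadratic in DAK).
def decompose_noise(dak, variance):
    c2, c1, c0 = np.polyfit(dak, variance, 2)
    return {"electronic": c0, "quantum": c1, "fixed_pattern": c2}

# Fractional contribution of each component at a given DAK; the DAK
# range where the quantum fraction dominates is the quantum-noise-
# limited operating range.
def noise_fractions(coeffs, K):
    parts = {"electronic": coeffs["electronic"],
             "quantum": coeffs["quantum"] * K,
             "fixed_pattern": coeffs["fixed_pattern"] * K ** 2}
    total = sum(parts.values())
    return {name: v / total for name, v in parts.items()}
```

Because quantum noise grows only linearly in K while fixed-pattern noise grows quadratically, the FP term inevitably dominates at high DAK, which is why the fit quality at the range extremes, and hence the weighting, matters.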