997 results for Trials (Piracy)--Massachusetts--Boston


Relevance: 20.00%

Publisher:

Abstract:

While ATM bandwidth-reservation techniques are able to offer the guarantees necessary for the delivery of real-time streams in many applications (e.g. live audio and video), they suffer from many disadvantages that make them unattractive (or impractical) for many others. These limitations, coupled with the flexibility and popularity of TCP/IP as a best-effort transport protocol, have prompted the network research community to propose and implement a number of techniques that adapt TCP/IP to the Available Bit Rate (ABR) and Unspecified Bit Rate (UBR) services in ATM network environments. This allows these environments to smoothly integrate (and make use of) currently available TCP-based applications and services without much (if any) modification. However, recent studies have shown that TCP/IP, when implemented over ATM networks, is susceptible to serious performance limitations. In a recently completed study, we unveiled a new transport protocol, TCP Boston, that turns ATM's 53-byte cell-oriented switching architecture into an advantage for TCP/IP. In this paper, we demonstrate the real-time features of TCP Boston that allow communication bandwidth to be traded off for timeliness. We start with an overview of the protocol. Next, we analytically characterize the dynamic redundancy control features of TCP Boston. We then present detailed simulation results that show the superiority of our protocol when compared to other adaptations of TCP/IP over ATM networks. In particular, we show that TCP Boston improves TCP/IP's performance over ATM networks for both network-centric metrics (e.g., effective throughput and percent of missed deadlines) and real-time application-centric metrics (e.g., response time and jitter).
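The abstract does not spell out the redundancy mechanism, but the bandwidth-for-timeliness trade-off it describes can be illustrated with a generic erasure-coding calculation: pick the smallest number of redundant cells m such that a block of n source cells survives an assumed independent per-cell loss rate with a target probability. A minimal sketch; the binomial loss model, names, and parameters are illustrative assumptions, not TCP Boston's actual algorithm:

```python
from math import comb

def delivery_prob(n, m, p_loss):
    """Probability that at least n of n+m cells arrive, assuming
    independent per-cell loss with probability p_loss."""
    total = n + m
    return sum(comb(total, k) * (1 - p_loss)**k * p_loss**(total - k)
               for k in range(n, total + 1))

def redundancy_needed(n, p_loss, target=0.99, m_max=64):
    """Smallest redundancy m (in cells) meeting the delivery target."""
    for m in range(m_max + 1):
        if delivery_prob(n, m, p_loss) >= target:
            return m
    return None

# Example: a 20-cell block under 2% independent cell loss.
print(redundancy_needed(20, 0.02))  # -> 2 with these parameters
```

Higher redundancy spends bandwidth to raise the chance that a block is decodable by its deadline, which is the trade-off the abstract refers to.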

Relevance: 20.00%

Publisher:

Abstract:

Content providers often consider the costs of security to be greater than the losses they might incur without it; many view "casual piracy" as their main concern. Our goal is to provide a low-cost defense against such attacks while maintaining rigorous security guarantees. Our defense is integrated with, and leverages, fast forward error correcting codes, such as Tornado codes, which are widely used to facilitate reliable delivery of rich content. We tune one such family of codes, while preserving its original desirable properties, to guarantee that none of the original content can be recovered whenever a key subset of encoded packets is missing. Ultimately we encrypt only these key codewords (only 4% of all transmissions), making the security overhead negligible.
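The paper's tuned Tornado codes are not reproduced here; the sketch below only illustrates the selective-encryption idea: after erasure encoding, encrypt a small "key" subset of packets without which the content cannot be recovered. The toy keystream cipher and the leading-4% selection rule are stand-ins (assumptions), not the authors' construction:

```python
import hashlib, os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with a SHA-256-derived keystream.
    Illustrative only -- use a vetted AEAD cipher in practice."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def protect(packets, key, key_fraction=0.04):
    """Encrypt only a 'key subset' of encoded packets (here: the first
    4%, an arbitrary stand-in); the rest are sent in the clear."""
    n_key = max(1, int(len(packets) * key_fraction))
    return ([keystream_xor(key, p) if i < n_key else p
             for i, p in enumerate(packets)], n_key)

packets = [os.urandom(48) for _ in range(100)]  # stand-in for encoded data
protected, n_key = protect(packets, os.urandom(32))
print(f"encrypted {n_key} of {len(packets)} packets")
```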

Relevance: 20.00%

Publisher:

Abstract:

We investigated adaptive neural control of precision grip forces during object lifting. A model is presented that adjusts reactive and anticipatory grip forces to a level just above that needed to stabilize lifted objects in the hand. The model obeys principles of cerebellar structure and function by using slip sensations as error signals to adapt phasic motor commands to tonic force generators associated with output synergies controlling grip aperture. The learned phasic commands are weight- and texture-dependent. Simulations of the new circuit model reproduce key aspects of experimental observations of force application. Over learning trials, the onset of grip force buildup comes to lead the load force buildup, and the rate-of-rise of grip force, but not load force, scales inversely with the friction of the gripped object.
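The slip-as-error-signal principle lends itself to a toy illustration: a feedforward grip gain is bumped up whenever grip force falls below a friction-dependent slip margin, so that across trials the settled grip force scales inversely with friction. A minimal sketch; all quantities and the update rule are invented for illustration, not the article's cerebellar circuit:

```python
def simulate_lifts(friction, trials=30, lr=0.3):
    """Toy slip-driven adaptation: grip = gain * load; a (near-)slip
    occurs when grip < (load / friction) * margin, and each slip
    increases the feedforward gain. Illustrative only."""
    load, margin = 5.0, 1.1          # object weight, safety margin
    gain = 0.5                       # initial (too-weak) grip gain
    history = []
    for _ in range(trials):
        grip = gain * load
        required = load / friction   # minimum grip to prevent slip
        if grip < required * margin: # slip error signal
            gain += lr * (required * margin - grip) / load
        history.append(grip)
    return history

for mu in (0.4, 0.8):                # slippery vs. grippy surface
    print(f"friction {mu}: settled grip force {simulate_lifts(mu)[-1]:.2f}")
```

Lower friction yields a higher settled grip force, matching the inverse scaling noted in the abstract.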

Relevance: 20.00%

Publisher:

Abstract:

How does the brain make decisions? Speed and accuracy of perceptual decisions covary with certainty in the input, and correlate with the rate of evidence accumulation in parietal and frontal cortical "decision neurons." A biophysically realistic model of interactions within and between Retina/LGN and cortical areas V1, MT, MST, and LIP, gated by basal ganglia, simulates dynamic properties of decision-making in response to the ambiguous visual motion stimuli used by Newsome, Shadlen, and colleagues in their neurophysiological experiments. The model clarifies how brain circuits that solve the aperture problem interact with a recurrent competitive network with self-normalizing choice properties to carry out probabilistic decisions in real time. Some scientists claim that perception and decision-making can be described using Bayesian inference or related general statistical ideas that estimate the optimal interpretation of the stimulus given priors and likelihoods. However, such concepts do not specify the neocortical mechanisms that enable perception and decision-making. The present model explains behavioral and neurophysiological decision-making data without an appeal to Bayesian concepts and, unlike other existing models of these data, generates perceptual representations and choice dynamics in response to the experimental visual stimuli. Quantitative model simulations include the time course of LIP neuronal dynamics, as well as behavioral accuracy and reaction time properties, during both correct and error trials at different levels of input ambiguity in both fixed-duration and reaction-time tasks. Model MT/MST interactions compute the global direction of random dot motion stimuli, while model LIP computes the stochastic perceptual decision that leads to a saccadic eye movement.
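For intuition, a "recurrent competitive network with self-normalizing choice properties" can be caricatured as two shunting populations racing to a decision threshold. The sketch below is a generic toy of that mechanism with arbitrary parameters, not the published Retina/LGN-V1-MT-MST-LIP model:

```python
import random

def decide(coherence, dt=0.001, threshold=0.6, seed=0):
    """Two shunting 'decision' populations with self-excitation and
    shunting lateral inhibition race to threshold. Toy sketch only.
    coherence > 0 favors alternative 0."""
    rng = random.Random(seed)
    x = [0.0, 0.0]
    inputs = [0.5 + coherence, 0.5 - coherence]   # motion evidence
    t = 0.0
    while max(x) < threshold and t < 2.0:
        for i in range(2):
            drive = inputs[i] + 2.0 * x[i] + rng.gauss(0, 0.5)
            # Shunting equation: passive decay, saturating excitation,
            # and inhibition gated by the other population's activity.
            other = x[1 - i]
            dx = -x[i] + (1 - x[i]) * drive - x[i] * other * 3.0
            x[i] = min(1.0, max(0.0, x[i] + dt * dx))
        t += dt
    return (0 if x[0] > x[1] else 1), t

choice, rt = decide(coherence=0.15)
print(f"choice {choice}, reaction time {rt * 1000:.0f} ms")
```

Lower coherence slows and destabilizes the race, qualitatively reproducing the speed-accuracy covariation the abstract describes.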

Relevance: 20.00%

Publisher:

Abstract:

This article introduces a new neural network architecture, called ARTMAP, that autonomously learns to classify arbitrarily many, arbitrarily ordered vectors into recognition categories based on predictive success. This supervised learning system is built up from a pair of Adaptive Resonance Theory modules (ARTa and ARTb) that are capable of self-organizing stable recognition categories in response to arbitrary sequences of input patterns. During training trials, the ARTa module receives a stream {a^(p)} of input patterns, and ARTb receives a stream {b^(p)} of input patterns, where b^(p) is the correct prediction given a^(p). These ART modules are linked by an associative learning network and an internal controller that ensures autonomous system operation in real time. During test trials, the remaining patterns a^(p) are presented without b^(p), and their predictions at ARTb are compared with b^(p). Tested on a benchmark machine learning database in both on-line and off-line simulations, the ARTMAP system learns orders of magnitude more quickly, efficiently, and accurately than alternative algorithms, and achieves 100% accuracy after training on less than half the input patterns in the database. It achieves these properties by using an internal controller that conjointly maximizes predictive generalization and minimizes predictive error by linking predictive success to category size on a trial-by-trial basis, using only local operations. This computation increases the vigilance parameter ρa of ARTa by the minimal amount needed to correct a predictive error at ARTb. Parameter ρa calibrates the minimum confidence that ARTa must have in a category, or hypothesis, activated by an input a^(p) in order for ARTa to accept that category, rather than search for a better one through an automatically controlled process of hypothesis testing. Parameter ρa is compared with the degree of match between a^(p) and the top-down learned expectation, or prototype, that is read out subsequent to activation of an ARTa category. Search occurs if the degree of match is less than ρa. ARTMAP is hereby a type of self-organizing expert system that calibrates the selectivity of its hypotheses based upon predictive success. As a result, rare but important events can be quickly and sharply distinguished even if they are similar to frequent events with different consequences. Between input trials, ρa relaxes to a baseline vigilance level; when this baseline is large, the system runs in a conservative mode, wherein predictions are made only if the system is confident of the outcome. Very few false-alarm errors then occur at any stage of learning, yet the system reaches asymptote with no loss of speed. Because ARTMAP learning is self-stabilizing, it can continue learning one or more databases, without degrading its corpus of memories, until its full memory capacity is utilized.
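The match-tracking computation described above (raising ρa just past the match of a category that mispredicts at ARTb) can be sketched with the fuzzy-ART match function |I ∧ w| / |I|. This is a toy sketch with invented categories and a fixed search order rather than the article's choice function:

```python
def fuzzy_and(a, b):
    return [min(x, y) for x, y in zip(a, b)]

def match(inp, w):
    """ART match function: |I ^ w| / |I| (norm of fuzzy intersection)."""
    return sum(fuzzy_and(inp, w)) / sum(inp)

def match_tracking(inp, categories, rho_baseline, predicts, correct, eps=1e-3):
    """Sketch of ARTMAP search: accept the first category whose match
    exceeds vigilance rho_a; on a wrong prediction at ARTb, raise rho_a
    just above the offending match and keep searching. Toy code only."""
    rho = rho_baseline
    for j, w in enumerate(categories):
        m = match(inp, w)
        if m < rho:
            continue                 # fails vigilance: skip category
        if predicts[j] == correct:
            return j, rho            # resonance: accept category j
        rho = m + eps                # match tracking: raise vigilance
    return None, rho                 # no category fits: recruit a new one

cats = [[1, 1, 0, 0], [0.9, 0.2, 0.8, 0.1]]
j, rho = match_tracking([0.9, 0.2, 0.8, 0.1], cats, 0.5,
                        predicts=["A", "B"], correct="B")
print(j, round(rho, 3))  # category 1 wins after vigilance is raised past 0.55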

Relevance: 20.00%

Publisher:

Abstract:

This article describes neural network models for adaptive control of arm movement trajectories during visually guided reaching and, more generally, a framework for unsupervised real-time error-based learning. The models clarify how a child, or untrained robot, can learn to reach for objects that it sees. Piaget has provided basic insights with his concept of a circular reaction: As an infant makes internally generated movements of its hand, the eyes automatically follow this motion. A transformation is learned between the visual representation of hand position and the motor representation of hand position. Learning of this transformation eventually enables the child to accurately reach for visually detected targets. Grossberg and Kuperstein have shown how the eye movement system can use visual error signals to correct movement parameters via cerebellar learning. Here it is shown how endogenously generated arm movements lead to adaptive tuning of arm control parameters. These movements also activate the target position representations that are used to learn the visuo-motor transformation that controls visually guided reaching. The AVITE model presented here is an adaptive neural circuit based on the Vector Integration to Endpoint (VITE) model for arm and speech trajectory generation of Bullock and Grossberg. In the VITE model, a Target Position Command (TPC) represents the location of the desired target. The Present Position Command (PPC) encodes the present hand-arm configuration. The Difference Vector (DV) population continuously computes the difference between the TPC and the PPC. A speed-controlling GO signal multiplies DV output. The PPC integrates the (DV)·(GO) product and generates an outflow command to the arm. Integration at the PPC continues at a rate dependent on GO signal size until the DV reaches zero, at which time the PPC equals the TPC. The AVITE model explains how self-consistent TPC and PPC coordinates are autonomously generated and learned. Learning of AVITE parameters is regulated by activation of a self-regulating Endogenous Random Generator (ERG) of training vectors. Each vector is integrated at the PPC, giving rise to a movement command. The generation of each vector induces a complementary postural phase during which ERG output stops and learning occurs. Then a new vector is generated and the cycle is repeated. This cyclic, biphasic behavior is controlled by a specialized gated dipole circuit. ERG output autonomously stops in such a way that, across trials, a broad sample of workspace target positions is generated. When the ERG shuts off, a modulator gate opens, copying the PPC into the TPC. Learning of a transformation from TPC to PPC occurs using the DV as an error signal that is zeroed due to learning. This learning scheme is called a Vector Associative Map, or VAM. The VAM model is a general-purpose device for autonomous real-time error-based learning and performance of associative maps. The DV stage serves the dual function of reading out new TPCs during performance and reading in new adaptive weights during learning, without a disruption of real-time operation. VAMs thus provide an on-line unsupervised alternative to the off-line properties of supervised error-correction learning algorithms. VAMs and VAM cascades for learning motor-to-motor and spatial-to-motor maps are described.
VAM models and Adaptive Resonance Theory (ART) models exhibit complementary matching, learning, and performance properties that together provide a foundation for designing a total sensory-cognitive and cognitive-motor autonomous system.
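The VITE kinematics summarized above reduce, in one dimension, to two coupled equations: DV relaxes toward TPC − PPC, and PPC integrates the rectified DV gated by GO. A minimal sketch with arbitrary gains and GO profile, not the article's parameterization:

```python
def vite(tpc=10.0, t_end=1.0, dt=0.001, alpha=30.0):
    """Scalar VITE sketch: DV tracks TPC - PPC; PPC integrates [DV]+ * GO.
    A slowly ramping GO signal shapes the velocity profile. Parameters
    are illustrative only."""
    ppc, dv, t = 0.0, 0.0, 0.0
    trajectory = []
    while t < t_end:
        go = 20.0 * t                          # ramping GO signal (arbitrary)
        dv += dt * alpha * (-dv + tpc - ppc)   # DV relaxes toward TPC - PPC
        ppc += dt * go * max(dv, 0.0)          # PPC integrates gated [DV]+
        trajectory.append(ppc)
        t += dt
    return trajectory

traj = vite()
print(f"final position {traj[-1]:.2f} (target 10.0)")
```

Integration stops, in effect, when DV reaches zero, at which point PPC equals TPC, exactly as described in the abstract.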

Relevance: 20.00%

Publisher:

Abstract:

This thesis interrogates the construction of fairness to the accused in historic child sexual abuse trials in Ireland. The protection of fairness is a requirement of any trial that claims to adhere to the rule of law. Historic child sexual abuse trials, in which the charges relate to events that are alleged to have taken place decades previously, present serious challenges to the ability of the trial process to safeguard fairness. They are a litmus test of the courts' commitment to fairness. The thesis finds that in historic abuse trials fairness to the accused has been significantly eroded and that the Irish courts have therefore failed to respect the core of the rule of law in these most serious of prosecutions. The thesis scrutinises two bodies of case law, both of which deal with the issue of whether evidence should reach the jury. First, it examines the decisions on applications brought by defendants seeking to prohibit their trial. The courts hearing prohibition applications face a dilemma: how to ensure the defendant is not put at risk of an unfair trial, while at the same time recognising that delay in reporting is a defining feature of these cases. The thesis traces the development of the prohibition case law and tracks the shifting interpretations given to fairness by the courts. Second, the thesis examines what fairness means in the superior courts' decisions regarding the admissibility of the following kinds of evidence, each of which presents particular challenges to the ability of the trial to safeguard fairness: evidence of multiple complainants; evidence of recovered memories; and evidence of complainants' therapeutic records. The thesis finds that in both bodies of case law the Irish courts have hollowed out the meaning of fairness. It makes proposals on how fairness might be placed at the heart of courts' decisions on admissibility in historic abuse trials. The thesis concludes that the erosion of fairness in historic abuse trials is indicative of a move away from the liberal model of criminal justice. It cautions that unless fairness is prioritised in historic child sexual abuse trials, the legitimacy of these trials, and that of all Irish criminal trials, will be contestable.

Relevance: 20.00%

Publisher:

Abstract:

This thesis assesses the current regulatory framework regarding clinical trials with neonates in Ireland from a children's rights perspective, as derived from the UN Convention on the Rights of the Child 1989 (UNCRC) and its supporting instruments. The focus on neonates is due to the particular need for clinical research with this group of children, their dependency on others for their protection, and the lack of attention given to them in the regulatory framework. The importance of children's rights in this area is linked to the role of human rights in the regulation of clinical research in general. A rights-based approach is of great practical relevance in reforming law, policy and practice. For example, the UNCRC contains a set of commonly agreed legal benchmarks which can be used to assess the current framework and shape recommendations for reform. In this way, it provides a set of binding norms under international law, which must be complied with by states and state actors in all law, policy and practice affecting children. However, the contribution which a children's rights approach could make to the regulation of research with children has not, to date, been explored in detail. This thesis aims to address this gap by developing a set of children's rights-based benchmarks, which are used to assess the Irish regulatory framework for clinical trials with neonates and to develop recommendations for reform. The purpose of the analysis and recommendations is to assess Ireland's compliance with international children's rights law in the area and to analyse the potential of children's rights to effectively address inadequacies in the Irish framework. The recommendations ultimately aim to develop a framework which will enhance the protection of neonates' rights in this important area of children's lives.

Relevance: 20.00%

Publisher:

Abstract:

Liver metastases have long been known to indicate an unfavourable disease course in breast cancer (BC). However, a small subset of patients with liver metastases alone who were treated with pre-taxane chemotherapy regimens was reported to have longer survival compared with patients with liver metastases plus metastases at other sites. In the present study, we examined the clinical outcome of breast cancer patients with liver metastases alone in the context of two phase III European Organisation for Research and Treatment of Cancer (EORTC) trials, which compared the efficacy of doxorubicin (A) versus paclitaxel (T) (trial 10923) and of AC (A plus cyclophosphamide) versus AT (A plus T) (trial 10961), given as first-line chemotherapy in metastatic BC patients. The median follow-up for the patients with liver metastases was 90.5 months in trial 10923 and 56.6 months in trial 10961. Patients with liver metastases alone comprised 18% of all patients with liver metastases in both the 10923 and 10961 trials. The median survival of patients with liver metastases alone versus liver plus other sites of metastases was 22.7 versus 14.2 months (log-rank test, P=0.002) in trial 10923 and 27.1 versus 16.8 months (log-rank test, P=0.19) in trial 10961. The median TTP (time to progression) for patients with liver metastases alone was also longer compared with the liver plus other sites of metastases group in both trials: 10.2 versus 8.8 months (log-rank test, P=0.02) in trial 10923 and 8.3 versus 6.7 months (log-rank test, P=0.37) in trial 10961. Most patients with liver metastases alone have progression of their disease in the liver again (96% and 60% of patients in trials 10923 and 10961, respectively). Given the high prevalence of breast cancer, the improved detection of liver metastases, the encouraging survival achieved with currently available cytotoxic agents, and the fact that a significant portion of patients with liver metastases alone have progression of their tumour in the liver again, a more aggressive multimodality treatment approach through prospective clinical trials seems worth exploring in this specific subset of women.

Relevance: 20.00%

Publisher:

Abstract:

OBJECTIVES: Side-effects of standard pain medications can limit their use. Therefore, nonpharmacologic pain relief techniques such as auriculotherapy may play an important role in pain management. Our aim was to conduct a systematic review and meta-analysis of studies evaluating auriculotherapy for pain management. DESIGN: MEDLINE®, ISI Web of Science, CINAHL, AMED, and the Cochrane Library were searched through December 2008. Randomized trials comparing auriculotherapy to sham, placebo, or standard-of-care control were included if they measured outcomes of pain or medication use and were published in English. Two reviewers independently assessed trial eligibility and quality and abstracted data to a standardized form. Standardized mean differences (SMDs) were calculated for studies using a pain score or analgesic requirement as a primary outcome. RESULTS: Seventeen studies met the inclusion criteria (8 perioperative, 4 acute, and 5 chronic pain). Auriculotherapy was superior to controls for studies evaluating pain intensity (SMD, 1.56 [95% confidence interval (CI): 0.85, 2.26]; 8 studies). For perioperative pain, auriculotherapy reduced analgesic use (SMD, 0.54 [95% CI: 0.30, 0.77]; 5 studies). For acute pain and chronic pain, auriculotherapy reduced pain intensity (SMD for acute pain, 1.35 [95% CI: 0.08, 2.64], 2 studies; SMD for chronic pain, 1.84 [95% CI: 0.60, 3.07], 5 studies). Removal of poor-quality studies did not alter the conclusions. Significant heterogeneity existed among studies of acute and chronic pain, but not perioperative pain. CONCLUSIONS: Auriculotherapy may be effective for the treatment of a variety of types of pain, especially postoperative pain. However, a more accurate estimate of the effect will require further large, well-designed trials.
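For readers unfamiliar with the outcome metric, an SMD divides the between-group difference in means by the pooled standard deviation. A sketch with invented numbers; the review's exact estimator (e.g. a Hedges' g small-sample correction) may differ:

```python
from math import sqrt

def smd(mean_c, sd_c, n_c, mean_t, sd_t, n_t):
    """Cohen's d form of the standardized mean difference:
    (control mean - treatment mean) / pooled SD, so positive values
    favor treatment when lower scores mean less pain."""
    pooled = sqrt(((n_c - 1) * sd_c**2 + (n_t - 1) * sd_t**2)
                  / (n_c + n_t - 2))
    return (mean_c - mean_t) / pooled

# Hypothetical 0-10 pain scores: control 5.6 (SD 2.2, n=40),
# auriculotherapy 3.1 (SD 2.0, n=40).
print(f"SMD = {smd(5.6, 2.2, 40, 3.1, 2.0, 40):.2f}")  # -> 1.19
```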

Relevance: 20.00%

Publisher:

Abstract:

BACKGROUND: A Royal Statistical Society Working Party recently recommended that "Greater use should be made of numerical, as opposed to verbal, descriptions of risk" in first-in-man clinical trials. This echoed the view of many clinicians and psychologists about risk communication. As the clinical trial industry expands rapidly across the globe, it is important to understand risk communication in Asian countries. METHODS: We conducted a cognitive experiment about participation in a hypothetical clinical trial of a pain relief medication and a survey in cancer and arthritis patients in Singapore. In part 1 of the experiment, the patients received information about the risk of side effects in one of three formats (frequency, percentage and verbal descriptor) and in one of two sequences (from least to most severe and from most to least severe), and were asked about their willingness to participate. In part 2, the patients received information about the risk in all three formats, in the same sequence, and were again asked about their willingness to participate. A survey of preference for risk presentation methods and usage of verbal descriptors immediately followed. RESULTS: Willingness to participate and the likelihood of changing one's decision were not affected by the risk presentation methods. Most patients indicated a preference for the frequency format, but patients with primary school or no formal education were indifferent. While the patients used the verbal descriptors "very common", "common" and "very rare" in ways similar to the European Commission's Guidelines, their usage of the descriptors "uncommon" and "rare" was substantially different from the EU's. CONCLUSION: In this sample of Asian cancer and arthritis patients, risk presentation format had no impact on willingness to participate in a clinical trial. However, there is a clear preference for the frequency format. The lay use of verbal descriptors was substantially different from the EU's.
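For context, the European Commission guideline descriptors mentioned above correspond to standard frequency bands, roughly as sketched below (a paraphrase of the commonly cited bands; verify against the current guideline before relying on them):

```python
def eu_descriptor(frequency):
    """Map a side-effect frequency to the EU verbal descriptor bands
    referenced in the abstract (paraphrased; check the guideline)."""
    if frequency >= 0.1:    return "very common"  # >= 1 in 10
    if frequency >= 0.01:   return "common"       # 1 in 100 to 1 in 10
    if frequency >= 0.001:  return "uncommon"     # 1 in 1,000 to 1 in 100
    if frequency >= 0.0001: return "rare"         # 1 in 10,000 to 1 in 1,000
    return "very rare"                            # < 1 in 10,000

for f in (0.2, 0.03, 0.002, 0.0005, 0.00001):
    print(f"{f:>8.5f}: {eu_descriptor(f)}")
```

The study's finding is that lay usage of "uncommon" and "rare" departs substantially from these bands.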

Relevance: 20.00%

Publisher:

Abstract:

BACKGROUND: Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as paper CRFs typically leveraged for quality measurement are not used in EDC processes. METHODS AND PRINCIPAL FINDINGS: The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions. CONCLUSIONS: Historically, medical record abstraction is the most significant source of error by an order of magnitude, and should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks.
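The quoted error-rate unit is straightforward to compute from audit counts; a sketch with invented figures:

```python
def errors_per_10k(n_errors, n_fields):
    """Database error rate expressed per 10,000 fields, the unit used
    in the abstract."""
    return n_errors / n_fields * 10_000

# Hypothetical audit: 57 discrepant fields out of 40,000 inspected.
print(f"{errors_per_10k(57, 40_000):.1f} errors per 10,000 fields")  # 14.2
```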

Relevance: 20.00%

Publisher:

Abstract:

BACKGROUND: Dropouts and missing data are nearly ubiquitous in obesity randomized controlled trials, threatening the validity and generalizability of conclusions. Herein, we meta-analytically evaluate the extent of missing data, the frequency with which various analytic methods are employed to accommodate dropouts, and the performance of multiple statistical methods. METHODOLOGY/PRINCIPAL FINDINGS: We searched PubMed and Cochrane databases (2000-2006) for articles published in English and manually searched bibliographic references. Articles of pharmaceutical randomized controlled trials with weight loss or weight gain prevention as major endpoints were included. Two authors independently reviewed each publication for inclusion. 121 articles met the inclusion criteria. Two authors independently extracted treatment, sample size, dropout rates, study duration, and statistical method used to handle missing data from all articles and resolved disagreements by consensus. In the meta-analysis, dropout rates were substantial, with the survival (non-dropout) rates being approximated by an exponential decay curve e^(-λt), where λ was estimated to be 0.0088 (95% bootstrap confidence interval: 0.0076 to 0.0100) and t represents time in weeks. The estimated dropout rate at 1 year was 37%. Most studies used last observation carried forward as the primary analytic method to handle missing data. We also obtained 12 raw obesity randomized controlled trial datasets for empirical analyses. Analyses of raw randomized controlled trial data suggested that both mixed models and multiple imputation performed well, but that multiple imputation may be more robust when missing data are extensive. CONCLUSION/SIGNIFICANCE: Our analysis offers an equation for predicting dropout rates that is useful for future study planning. Our raw data analyses suggest that multiple imputation is better than other methods for handling missing data in obesity randomized controlled trials, followed closely by mixed models. We suggest these methods supplant last observation carried forward as the primary method of analysis.
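The reported one-year figure follows directly from the fitted decay model: with λ = 0.0088 per week, retention at 52 weeks is e^(-0.0088·52) ≈ 63%, i.e. the stated ≈37% dropout. A quick check:

```python
from math import exp

lam = 0.0088  # weekly dropout hazard estimated in the meta-analysis
for weeks in (12, 26, 52):
    retention = exp(-lam * weeks)  # survival (non-dropout) fraction
    print(f"week {weeks:>2}: retention {retention:.1%}, "
          f"dropout {1 - retention:.1%}")
# week 52 reproduces the reported ~37% one-year dropout rate
```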

Relevance: 20.00%

Publisher:

Abstract:

BACKGROUND: With the globalization of clinical trials, large developing nations have substantially increased their participation in multi-site studies. This participation has raised ethical concerns, among them the fear that local customs, habits and culture are not respected when asking potential participants to take part in a study. This knowledge gap is particularly noticeable for Indian subjects: despite the large number of participants, little is known regarding what factors affect their willingness to participate in clinical trials. METHODS: We conducted a meta-analysis of all studies evaluating the factors and barriers, from the perspective of potential Indian participants, contributing to their participation in clinical trials. We searched both international and India-specific bibliographic databases, including PubMed, Cochrane, Open J-Gate, MedInd, Scirus and Medknow, also performing hand searches and communicating with authors to obtain additional references. We included studies dealing exclusively with the participation of Indians in clinical trials. Data extraction was conducted by three researchers, with disagreements resolved by consensus. RESULTS: Six qualitative studies and one survey were found evaluating the main themes affecting the participation of Indian subjects. Factors favoring participation included personal health benefits, altruism, trust in physicians, a source of extra income, detailed knowledge, and methods for motivating participants; barriers included mistrust of trial organizations, concerns about the efficacy and safety of trials, psychological reasons, trial burden, loss of confidentiality, dependency issues, and language. CONCLUSION: We identified factors that facilitate, and barriers that impede, trial participation decisions among Indian subjects. Due consideration and weight should be given to these factors when planning future trials in India.