925 results for Large-group methods
Abstract:
We study the problem of preprocessing a large graph so that point-to-point shortest-path queries can be answered very fast. Computing shortest paths is a well-studied problem, but exact algorithms do not scale to the huge graphs encountered on the web, in social networks, and in other applications. In this paper we focus on approximate methods for distance estimation, in particular landmark-based distance indexing. This approach involves selecting a subset of nodes as landmarks and computing (offline) the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, we can estimate it quickly by combining the precomputed distances of the two nodes to the landmarks. We prove that selecting the optimal set of landmarks is an NP-hard problem, and thus heuristic solutions need to be employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. We therefore develop and experimentally compare a number of simple selection methods that scale well to large graphs. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the suggested techniques is tested experimentally on five different real-world graphs with millions of edges; for a given accuracy, they require up to 250 times less space than the current approach in the literature, which selects landmarks at random. Finally, we study applications of our method to two problems arising naturally in large-scale networks, namely social search and community detection.
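The estimate itself follows from the triangle inequality: the distance from u to v is bounded by the distance through any landmark. A minimal sketch of that offline/online split is given below; the function names and the toy graph are illustrative, and the paper's landmark-selection heuristics are not reproduced here.

```python
# Minimal sketch of landmark-based distance estimation on an unweighted,
# undirected graph. Names are illustrative, not taken from the paper.
from collections import deque

def bfs_distances(graph, source):
    """Single-source shortest-path (hop) distances via BFS."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def build_index(graph, landmarks):
    """Offline step: precompute distances from every landmark to all nodes."""
    return {L: bfs_distances(graph, L) for L in landmarks}

def estimate_distance(index, u, v):
    """Online step: upper bound d(u, v) <= min_L d(u, L) + d(L, v)."""
    return min(idx[u] + idx[v] for idx in index.values() if u in idx and v in idx)

# Toy usage with two arbitrarily chosen landmarks.
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
index = build_index(graph, landmarks=[0, 2])
print(estimate_distance(index, 0, 4))  # exact here: 4
```

The bound is tight whenever some landmark lies on (or near) a shortest u-v path, which is why central, well-spread landmarks tend to give more accurate estimates than random ones.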
Abstract:
We propose a multi-object, multi-camera framework for tracking large numbers of tightly spaced objects that move rapidly in three dimensions. We formulate the problem of finding correspondences across multiple views as a multidimensional assignment problem and use a greedy randomized adaptive search procedure (GRASP) to solve this NP-hard problem efficiently. To account for occlusions, we relax the one-to-one constraint that one measurement corresponds to one object and iteratively solve the relaxed assignment problem. After correspondences are established, object trajectories are estimated by stereoscopic reconstruction using an epipolar-neighborhood search. We embedded our method into a tracker-to-tracker multi-view fusion system that not only obtains the three-dimensional trajectories of closely moving objects but also accurately resolves track ambiguities that could not be settled from single views because of occlusion. We conducted experiments to validate our greedy assignment procedure and our technique for recovering from occlusions. We successfully track hundreds of flying bats and provide an analysis of their group behavior based on 150 reconstructed 3D trajectories.
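For readers unfamiliar with GRASP, the sketch below shows only its greedy randomized construction step for a simplified two-view assignment; the cost values and names are placeholders, and the paper's harder multi-view (multidimensional) formulation with the relaxed one-to-one constraint is not reproduced.

```python
# Illustrative GRASP-style construction for a two-view assignment problem.
# cost[i][j] could be, e.g., a reprojection error for matching detection i in
# view 1 with detection j in view 2 (made-up numbers below).
import random

def grasp_construct(cost, alpha=0.3, rng=random):
    """Greedily build an assignment, picking each pairing at random from a
    restricted candidate list (RCL) of the cheapest still-feasible pairs."""
    pairs = [(c, i, j) for i, row in enumerate(cost) for j, c in enumerate(row)]
    used_i, used_j, assignment = set(), set(), []
    while True:
        feasible = [p for p in pairs if p[1] not in used_i and p[2] not in used_j]
        if not feasible:
            return assignment
        cmin = min(p[0] for p in feasible)
        cmax = max(p[0] for p in feasible)
        rcl = [p for p in feasible if p[0] <= cmin + alpha * (cmax - cmin)]
        c, i, j = rng.choice(rcl)       # randomized greedy choice
        assignment.append((i, j, c))
        used_i.add(i)
        used_j.add(j)

cost = [[0.1, 0.9, 0.8],
        [0.7, 0.2, 0.6],
        [0.9, 0.8, 0.3]]
print(grasp_construct(cost))
```

A full GRASP repeats this randomized construction many times, applies a local search to each candidate solution, and keeps the best assignment found.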
Abstract:
Consideration of how people respond to the question "What is this?" has suggested new problem frontiers for pattern recognition and information fusion, as well as neural systems that embody the cognitive transformation of declarative information into relational knowledge. In contrast to traditional classification methods, which aim to find the single correct label for each exemplar ("This is a car"), the new approach discovers rules that embody coherent relationships among labels which would otherwise appear contradictory to a learning system ("This is a car, that is a vehicle, over there is a sedan"). This talk will describe how an individual who experiences exemplars in real time, with each exemplar trained on at most one category label, can autonomously discover a hierarchy of cognitive rules, thereby converting local information into global knowledge. Computational examples are based on the observation that sensors working at different times, locations, and spatial scales, and experts with different goals, languages, and situations, may produce apparently inconsistent image labels, which are reconciled by implicit underlying relationships that the network's learning process discovers. The ARTMAP information fusion system can, moreover, integrate multiple separate knowledge hierarchies by fusing independent domains into a unified structure. In the process, the system discovers cross-domain rules, inferring multilevel relationships among groups of output classes without any supervised labeling of these relationships. In order to self-organize its expert system, the ARTMAP information fusion network features distributed code representations that exploit the model's intrinsic capacity for one-to-many learning ("This is a car and a vehicle and a sedan") as well as many-to-one learning ("Each of those vehicles is a car"). Fusion system software, testbed datasets, and articles are available from http://cns.bu.edu/techlab.
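The toy script below is not the ARTMAP network itself; it only illustrates, under the simplifying assumption that we can pool the labels attached to each exemplar, how apparently inconsistent one-label-at-a-time training events can imply "is-a" rules when one label's exemplars form a subset of another's.

```python
# Toy illustration (not ARTMAP): infer label hierarchy rules from the subset
# structure of which exemplars ever received which label.
from collections import defaultdict

observations = [  # (exemplar_id, label) pairs; one label per training event
    (1, "sedan"), (1, "car"), (1, "vehicle"),
    (2, "car"),   (2, "vehicle"),
    (3, "vehicle"),
]

exemplars_with = defaultdict(set)
for ex, label in observations:
    exemplars_with[label].add(ex)

# Rule a -> b ("every a is a b") whenever a's exemplars are a subset of b's.
rules = [(a, b) for a in exemplars_with for b in exemplars_with
         if a != b and exemplars_with[a] <= exemplars_with[b]]
print(rules)  # [('sedan', 'car'), ('sedan', 'vehicle'), ('car', 'vehicle')]
```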
Abstract:
Stress can be understood in terms of the meaning of stressful experiences for individuals. The meaning of stressful experiences involves threats to self-adequacy, where self-adequacy is considered a basic human need. Appropriate research methods are required to explore this aspect of stress. The present study is a qualitative exploration of the stress experienced by a group of 27 students at the National Institute of Higher Education, Limerick (since renamed the University of Limerick). The study was carried out by the resident student counsellor at the college. A model of student stress, based on student developmental needs, was explored. The data consist of a series of interviews recorded with each of the 27 students over a three-month period. These interviews were transcribed, and the resulting transcripts are the subject of detailed analysis. The analysis of the data is an account of the student counsellor's sense-making of the students' reported experiences. The aim of the analysis was to reduce the large amount of data to its most salient aspects in an ordered fashion, so as to examine the application of a developmental model of stress with this group of students. There were two key elements to the analysis. First, the raw data were edited to identify the key statements contained in the interviews. Second, the statements were categorised as a means of summarising the data. The results of the qualitative data analysis were then applied to the developmental model. The analysis revealed a number of patterns of stress amongst the sample of students. Patterns of academic over-identification, parental conflict and social inadequacy were particularly noteworthy. These patterns consisted of an integration of academic, family and social stresses within a developmental framework. Gender differences with regard to the need for separateness and belonging are highlighted. Appropriate student stress intervention strategies are discussed. Based on the present results, the relationship between stress and development is highlighted and is recommended as a firm basis for future studies of stress in general and student stress in particular.
Abstract:
For two multinormal populations with equal covariance matrices, the likelihood ratio discriminant function, an alternative allocation rule to the sample linear discriminant function when n1 ≠ n2, is studied analytically. Under the assumption of a known covariance matrix, its distribution is derived and the expectations of its actual and apparent error rates are evaluated and compared with those of the sample linear discriminant function. This comparison indicates that the likelihood ratio allocation rule is robust to unequal sample sizes. The quadratic discriminant function is studied, its distribution is reviewed, and the evaluation of its probabilities of misclassification is discussed. For known covariance matrices the distribution of the sample quadratic discriminant function is derived. When the known covariance matrices are proportional, exact expressions for the expectations of its actual and apparent error rates are obtained and evaluated. The effectiveness of the sample linear discriminant function in this case is also considered. Estimation of the true log-odds for two multinormal populations with equal or unequal covariance matrices is studied. The estimative, Bayesian predictive and kernel methods are compared by evaluating their biases and mean square errors. Some algebraic expressions for these quantities are derived. With equal covariance matrices the predictive method is preferable. The source of this superiority is investigated by considering its performance for various levels of fixed true log-odds. It is also shown that the predictive method is sensitive to n1 ≠ n2. For unequal but proportional covariance matrices the unbiased estimative method is preferred. Product normal kernel density estimates are used to give a kernel estimator of the true log-odds. The effect of correlation among the variables when product kernels are used is considered. With equal covariance matrices the kernel and parametric estimators are compared by simulation. For moderately correlated variables and large dimensions the product kernel method is a good estimator of the true log-odds.
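For reference, the textbook population versions of the quantities discussed above are written out below for known parameters (the thesis, of course, studies their sample analogues).

```latex
% Equal covariance matrices (linear rule): allocate x to population 1 if
\[
  L(x) = \left(x - \tfrac{1}{2}(\mu_1 + \mu_2)\right)^{\top} \Sigma^{-1} (\mu_1 - \mu_2)
       > \log\frac{\pi_2}{\pi_1}.
\]
% Unequal covariance matrices (quadratic rule): the true log-odds are
\[
  Q(x) = \tfrac{1}{2}\log\frac{|\Sigma_2|}{|\Sigma_1|}
       - \tfrac{1}{2}(x-\mu_1)^{\top}\Sigma_1^{-1}(x-\mu_1)
       + \tfrac{1}{2}(x-\mu_2)^{\top}\Sigma_2^{-1}(x-\mu_2)
       + \log\frac{\pi_1}{\pi_2}.
\]
```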
Abstract:
In this work, the properties of strained tetrahedrally bonded materials are explored theoretically, with special focus on group-III nitrides. To this end, a multiscale approach is taken: accurate quantitative calculations of material properties are carried out in a quantum first-principles framework for small systems. These properties are then extrapolated, and empirical methods are employed to make predictions for larger systems such as alloys or nanostructures. We focus our attention on elasticity and electric polarization in semiconductors. These quantities serve as input for the calculation of the optoelectronic properties of these systems. Regarding the methods employed, our first-principles calculations use highly accurate density functional theory (DFT) within both standard Kohn-Sham and generalized (hybrid-functional) Kohn-Sham approaches. We have developed our own empirical methods, including a valence force field (VFF) and a point-dipole model for the calculation of the local polarization and the local polarization potential. Our local polarization model gives insight, for the first time, into local fluctuations of the electric polarization at the atomistic level. At the continuum level, we have studied composition-engineering optimization of nitride nanostructures for built-in electrostatic field reduction, and have developed a highly efficient hybrid analytical-numerical staggered-grid computational implementation of continuum elasticity theory that is used to treat larger systems, such as quantum dots.
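As a rough illustration of the point-dipole idea, the sketch below just superposes the standard electrostatic potential of a set of point dipoles; the dipole values, positions and units are placeholders, and the real model's first-principles parameterization is not reproduced here.

```python
# Generic point-dipole superposition for a local polarization potential.
import numpy as np

def dipole_potential(r, positions, dipoles, eps0=8.8541878128e-12):
    """Potential at r from point dipoles:
    phi(r) = (1 / 4*pi*eps0) * sum_i p_i . (r - r_i) / |r - r_i|^3."""
    r = np.asarray(r, dtype=float)
    phi = 0.0
    for r_i, p_i in zip(positions, dipoles):
        d = r - np.asarray(r_i, dtype=float)
        phi += np.dot(p_i, d) / np.linalg.norm(d) ** 3
    return phi / (4.0 * np.pi * eps0)

# Toy usage: two anti-parallel dipoles (SI units, arbitrary magnitudes).
positions = [(0.0, 0.0, 0.0), (0.0, 0.0, 5e-10)]
dipoles = [(0.0, 0.0, 1e-30), (0.0, 0.0, -1e-30)]
print(dipole_potential((0.0, 0.0, 2.5e-10), positions, dipoles))
```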
Abstract:
This study explores the experiences of stress and burnout in Irish second-level teachers and examines the contribution of a number of individual, environmental and health factors to burnout development. As no such study has previously been carried out with this population, a mixed-methods approach was adopted in order to investigate the subject matter comprehensively. Teaching has consistently been identified as a particularly stressful occupation, and research investigating the development of burnout is of great importance in designing measures to address the problem. The first phase of the study involved focus groups conducted with a total of 20 second-level teachers from 11 different schools in the greater Cork city area. Findings suggest that teachers experience a variety of stressors: in class, in the staff room and outside of school. The second phase of the study employed a survey to examine the factors associated with burnout. Analysis of 192 responses suggested that burnout results from a combination of demographic, personality, environmental and coping factors. Burnout was also found to be associated with a number of physical symptoms, particularly trouble sleeping and fatigue. Findings suggest that interventions designed to reduce burnout must reflect the complexity of the problem and its development. Based on the research findings, interventions that combine individual and organisational approaches should provide the best chance of effectively tackling burnout.
Abstract:
The abundance of many commercially important fish stocks is declining, and this has led to widespread concern about the performance of traditional approaches to fisheries management. Quantitative models are used to obtain estimates of population abundance, and management advice is based on annual harvest levels such as the total allowable catch (TAC), whereby only a certain amount of catch is allowed from specific fish stocks. However, these models are data intensive and less useful when stocks have limited historical information. This study examined whether empirical stock indicators can be used to manage fisheries. The relationship between indicators and the underlying stock abundance is not direct and hence can be affected by disturbances that may account for both transient and persistent effects. Methods from Statistical Process Control (SPC) theory, such as Cumulative Sum (CUSUM) control charts, are useful in classifying these effects and hence can be used to trigger a management response only when a significant impact on the stock biomass occurs. This thesis explores how empirical indicators, together with CUSUM, can be used for the monitoring, assessment and management of fish stocks. I begin my thesis by exploring various age-based catch indicators to identify those which are potentially useful in tracking the state of fish stocks. The sensitivity and response of these indicators to changes in Spawning Stock Biomass (SSB) showed that indicators based on age groups that are fully selected to the fishing gear, or Large Fish Indicators (LFIs), are most useful and robust across the range of scenarios considered. The Decision-Interval (DI-CUSUM) and Self-Starting (SS-CUSUM) forms are the two types of control chart used in this study. In contrast to the DI-CUSUM, the SS-CUSUM can be initiated without specifying a target reference point (a 'control mean') for detecting out-of-control (significant impact) situations. The sensitivity and specificity of the SS-CUSUM showed that performance is robust when LFIs are used. Once an out-of-control situation is detected, the next step is to determine how much shift has occurred in the underlying stock biomass. If an estimate of this shift is available, it can be used to update the TAC by incorporation into Harvest Control Rules (HCRs). Various methods from Engineering Process Control (EPC) theory were tested to determine which can measure the shift size in stock biomass with the highest accuracy. Results showed that methods based on Grubb's harmonic rule gave reliable shift-size estimates. The accuracy of these estimates can be improved by monitoring a combined indicator metric of stock recruitment and the LFI, because this may account for impacts independent of fishing. The procedure of integrating both SPC and EPC is known as Statistical Process Adjustment (SPA). An HCR based on SPA was designed for the DI-CUSUM, and the scheme was successful in bringing out-of-control fish stocks back to their in-control state. The HCR was also tested using the SS-CUSUM in the context of data-poor fish stocks. Results showed that the scheme will be useful for sustaining the initial in-control state of a fish stock until more observations become available for quantitative assessments.
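For readers unfamiliar with the chart itself, the sketch below implements a standard two-sided, decision-interval CUSUM on an indicator series; the reference mean and standard deviation, the allowance k and the decision interval h are illustrative, and the thesis's self-starting variant (which needs no pre-specified control mean) is not reproduced.

```python
# Standard two-sided decision-interval CUSUM on an indicator series
# (e.g. a large-fish indicator). Parameters below are illustrative.
def cusum(series, mean, sd, k=0.5, h=5.0):
    """Return (upper, lower, signals): cumulative sums in sd units and the
    indices at which either sum crosses the decision interval h."""
    c_plus, c_minus = 0.0, 0.0
    upper, lower, signals = [], [], []
    for t, x in enumerate(series):
        z = (x - mean) / sd                    # standardise the observation
        c_plus = max(0.0, c_plus + z - k)      # accumulates upward shifts
        c_minus = min(0.0, c_minus + z + k)    # accumulates downward shifts
        upper.append(c_plus)
        lower.append(c_minus)
        if c_plus > h or c_minus < -h:
            signals.append(t)                  # out-of-control signal
    return upper, lower, signals

# Toy usage: a downward shift after index 10 eventually triggers a signal.
obs = [1.0] * 10 + [0.4] * 15
print(cusum(obs, mean=1.0, sd=0.2)[2])
```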
Abstract:
Background: Many European countries, including Ireland, lack high-quality, ongoing, population-based estimates of maternal behaviours and experiences during pregnancy. PRAMS (the Pregnancy Risk Assessment Monitoring System) is a CDC surveillance program established in the United States in 1987 to generate high-quality, population-based data to reduce infant mortality rates and improve maternal and infant health. PRAMS is the only ongoing population-based surveillance system of maternal behaviours and experiences before, during and after pregnancy worldwide. Methods: The objective of this study was to adapt, test and evaluate a modified CDC PRAMS methodology in Ireland. The birth certificate file, which is the standard sampling frame for PRAMS in the United States, was not available for the PRAMS Ireland study. Consequently, delivery record books for the period between 3 and 5 months before the study start date at a large urban obstetric hospital [8,900 births per year] were used to randomly sample 124 women. Name, address, maternal age, infant sex, gestational age at delivery, delivery method, APGAR score and birth weight were manually extracted from records. Stillbirths and early neonatal deaths were excluded using APGAR scores and hospital records. Women were sent a letter of invitation to participate, including the option to opt out, followed by a modified PRAMS survey, a reminder letter and a final survey. Results: The response rate for the pilot was 67%. Two per cent of women refused the survey, 7% opted out of the study and 24% did not respond. Survey items were at least 88% complete for all 82 respondents. Prevalence estimates of socially undesirable behaviours such as alcohol consumption during pregnancy were high [>50%] and comparable with international estimates. Conclusion: PRAMS is a feasible and valid method of collecting information on maternal experiences and behaviours during pregnancy in Ireland. With further work, PRAMS may offer a potential solution to data deficits in maternal health behaviour indicators in Ireland. This study is important to researchers in Europe and elsewhere who may be interested in new ways of tailoring an established CDC methodology to their own settings to resolve data deficits in maternal health.
Abstract:
Phages belonging to the 936 group represent one of the most prevalent and frequently isolated phage groups in dairy fermentation processes that use Lactococcus lactis as the primary starter culture. In recent years extensive research has been carried out to characterise this phage group at the genomic level, in an effort to understand how the 936 group phages dominate this particular niche and cause regular problems during large-scale milk fermentations. This thesis describes a large-scale screening of industrial whey samples, leading to the isolation of forty-three genetically distinct lactococcal phages. Using multiplex PCR, all phages were identified as members of the 936 group. The complete genomes of thirty-eight of these phages were determined using next-generation sequencing technologies, which identified several regions of divergence. These included the structural region surrounding the major tail protein, the replication region, and the genes involved in phage DNA packaging. For a number of phages the latter genomic region was found to harbour genes encoding putative orphan methyltransferases. Using single-molecule real-time (SMRT) sequencing and heterologous gene expression, the target motifs for several of these MTases were determined and subsequently shown to actively protect phage DNA from restriction endonuclease activity. Comparative analysis of the thirty-eight phages with fifty-two previously sequenced members of this group showed that the core genome consists of 28 genes, while the non-core genome was found to fluctuate irrespective of geographical location or time of isolation. This study highlights the continued need to perform large-scale characterisation of the bacteriophage populations infecting industrial fermentation facilities, in an effort to further our understanding of dairy phages and of ways to control their proliferation.
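Conceptually, the core genome is just the set of orthologous genes shared by every genome compared; the toy sketch below shows that set arithmetic with placeholder gene names, while real analyses would first cluster predicted proteins into orthologous groups.

```python
# Simplified core/accessory genome computation over per-phage gene sets.
gene_sets = {
    "phage_A": {"mtp", "terL", "rep", "holin", "mtase"},
    "phage_B": {"mtp", "terL", "rep", "holin"},
    "phage_C": {"mtp", "terL", "rep", "lysin"},
}

core = set.intersection(*gene_sets.values())        # present in every phage
accessory = set.union(*gene_sets.values()) - core   # variable (non-core) genes
print(sorted(core))       # ['mtp', 'rep', 'terL']
print(sorted(accessory))  # ['holin', 'lysin', 'mtase']
```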
Abstract:
Introduction: Copayments for prescriptions are associated with decreased adherence to medicines, resulting in increased health service utilisation, morbidity and mortality. In October 2010 a 50c copayment per prescription item was introduced on the General Medical Services (GMS) scheme in Ireland, the national public health insurance programme for low-income and older people. The copayment was increased to €1.50 per prescription item in January 2013. To date, the impact of these copayments on adherence to prescription medicines on the GMS scheme has not been assessed. Given that the GMS population comprises more than 40% of the Irish population, this presents an important public health problem. The aim of this thesis was to assess the impact of the two prescription copayments, 50c and €1.50, on adherence to medicines. Methods: In Chapter 2 the published literature was systematically reviewed, with meta-analysis, to a) develop evidence on cost-sharing for prescriptions and adherence to medicines and b) develop evidence for an alternative policy option, the removal of copayments. The core research question of this thesis was addressed by a large before-and-after longitudinal study, with a comparator group, using the national pharmacy claims database. New users of essential and less-essential medicines were included in the study, with sample sizes ranging from 7,007 to 136,111 individuals in the different medication groups. Segmented regression was used with generalised estimating equations to allow for correlations between repeated monthly measurements of adherence. A qualitative study involving 24 individuals was conducted to assess patient attitudes towards the 50c copayment policy. The qualitative and quantitative findings were integrated in the discussion chapter of the thesis. Because the vast majority of the literature on this topic is generated in North America, a test of generalisability was carried out in Chapter 5 by comparing the impact of two similar copayment interventions on adherence, one in the U.S. and one in Ireland. The method used to measure adherence in Chapters 3 and 5 was validated in Chapter 6. Results: The systematic review with meta-analysis demonstrated an 11% increase in the odds of non-adherence (OR 1.11, 95% CI 1.09 to 1.14) when publicly insured populations were exposed to copayments. The second systematic review found moderate but variable improvements in adherence after the removal or reduction of copayments in a general population. The core paper of this thesis found that both the 50c and €1.50 copayments on the GMS scheme were associated with larger reductions in adherence to less-essential medicines than to essential medicines directly after the implementation of the policies. An important exception to this pattern was observed: adherence to antidepressant medications declined to a larger extent than adherence to other essential medicines after both copayments. The cross-country comparison indicated that North American evidence on cost-sharing for prescriptions is not automatically generalisable to the Irish setting. Irish patients had greater immediate decreases in adherence to antihypertensive and antihyperlipidaemic medicines of -5.3% (95% CI -6.9 to -3.7) and -2.8% (95% CI -4.9 to -0.7), respectively, directly after the policy changes, relative to their U.S. counterparts. In the long term, however, the U.S. and Irish populations behaved similarly.
The concordance study highlighted the possibility of measurement bias in the measurement of adherence to non-steroidal anti-inflammatory drugs in Chapter 3. Conclusions: This thesis has presented two reviews of international cost-sharing policies, an assessment of the generalisability of international evidence, and both qualitative and quantitative examinations of cost-sharing policies for prescription medicines on the GMS scheme in Ireland. It was found that the introduction of a 50c copayment and its subsequent increase to €1.50 on the GMS scheme had a larger impact on adherence to less-essential medicines relative to essential medicines, with the exception of antidepressant medications. This is in line with policy objectives to reduce moral hazard and is therefore demonstrative of the value of such policies. There are, however, some caveats. The copayment now stands at €2.50 per prescription item; the impact of this increase has yet to be assessed, which is an obvious avenue for future research. Careful monitoring for adverse effects in socio-economically disadvantaged groups within the GMS population is also warranted. International evidence can be applied to the Irish setting to aid future decision making in this area, but not without first placing it in the local context. Patients accepted the introduction of the 50c charge but voiced concerns over a rising price. The challenge for policymakers is to find the ‘optimal copayment’, whereby moral hazard is decreased but access to essential chronic disease medicines that provide advantages at the population level is not deterred. The evidence presented in this thesis will be usable in future policy-making in Ireland.
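The analytical core described above, a segmented (interrupted time-series) regression fitted with generalised estimating equations, can be sketched as follows; the data frame, column names and working-correlation choice are illustrative assumptions, not taken from the thesis.

```python
# Hedged sketch of a segmented regression fitted with GEE to allow for
# correlated repeated adherence measurements within patients.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Assumed layout: one row per patient-month with columns patient_id,
# adherence, time (months since study start), post (0/1 after the copayment),
# and time_after (months since the copayment, 0 beforehand).
def fit_segmented_gee(df: pd.DataFrame):
    model = smf.gee(
        "adherence ~ time + post + time_after",  # post = level change,
        groups="patient_id",                     # time_after = slope change
        data=df,
        family=sm.families.Gaussian(),
        cov_struct=sm.cov_struct.Exchangeable(), # working correlation
    )
    return model.fit()

# result = fit_segmented_gee(df); print(result.summary())
```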
Abstract:
OBJECTIVE: Strict lifelong compliance with a gluten-free diet (GFD) minimizes the long-term risk of mortality, especially from lymphoma, in adult celiac disease (CD). Although serum IgA antitransglutaminase antibodies (IgA-tTG-ab), like antiendomysium antibodies (IgA-EMA), are sensitive and specific screening tests for untreated CD, their reliability as predictors of strict compliance with, and dietary transgressions from, a GFD is not precisely known. We aimed to address this question in consecutively treated adult celiacs. METHODS: In a cross-sectional study, 95 non-IgA-deficient adult (median age: 41 yr) celiacs on a GFD for at least 1 yr (median: 6 yr) were subjected to 1) a dietician-administered inquiry to pinpoint and quantify the number and levels of transgressions (classified as moderate or large, using as a cutoff value the median gluten amount ingested by the overall noncompliant patients of the series) over the previous 2 months, 2) a search for IgA-tTG-ab and -EMA, and 3) perendoscopic duodenal biopsies. The ability of both antibodies to discriminate celiacs with and without detected transgressions was described using receiver operating characteristic curves and quantified in terms of sensitivity and specificity, according to the level of transgressions. RESULTS: Forty (42%) patients strictly adhered to a GFD; 55 (58%) had committed transgressions, classified as moderate (≤18 g of gluten/2 months; median number 6) in 27 and large (>18 g; median number 69) in 28. IgA-tTG-ab and -EMA specificity (proportion of correct recognition of strictly compliant celiacs) was 0.97 and 0.98, respectively, and sensitivity (proportion of correct recognition of overall, moderate, and large levels of transgressions) was 0.52, 0.31, and 0.77, and 0.62, 0.37, and 0.86, respectively. IgA-tTG-ab and -EMA titers were correlated (p < 0.001) with transgression levels (r = 0.560 and r = 0.631, respectively) and with one another (p < 0.001) in the whole patient population (r = 0.834, n = 84) as well as in the noncompliant (r = 0.915, n = 48) group. The specificity and sensitivity of IgA-tTG-ab and IgA-EMA for recognition of total villous atrophy in patients on a GFD were 0.90 and 0.91, and 0.60 and 0.73, respectively. CONCLUSIONS: In adult CD patients on a GFD, IgA-tTG-ab are poor predictors of dietary transgressions. Their negativity is a falsely reassuring marker of strict diet compliance.
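For clarity, the sensitivity and specificity figures quoted above come from cross-classifying the antibody result against dietary status; the small helper below shows that arithmetic with made-up counts (not the study's data).

```python
# Sensitivity/specificity from a 2x2 cross-classification of antibody result
# (positive/negative) against dietary status (transgressor/strictly compliant).
def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # transgressors correctly antibody-positive
    specificity = tn / (tn + fp)  # compliant patients correctly antibody-negative
    return sensitivity, specificity

# Toy usage (illustrative counts only):
sens, spec = sensitivity_specificity(tp=8, fn=2, tn=18, fp=2)
print(round(sens, 2), round(spec, 2))  # 0.8 0.9
```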
Abstract:
BACKGROUND: To collect oncologists' experience and opinion on adjuvant chemotherapy in elderly breast cancer patients. MATERIALS AND METHODS: A questionnaire was circulated among the members of the Breast International Group. RESULTS: A total of 277 oncologists from 28 countries participated in the survey. Seventy years is the age cut-off commonly used to define a patient as elderly. Biological age and the biological characteristics of the tumor are the most frequently used criteria to propose adjuvant chemotherapy to an elderly patient. Combination therapy with cyclophosphamide, methotrexate and fluorouracil on days 1 and 8 is the most frequently prescribed regimen. Great interest exists in oral chemotherapy. CONCLUSION: There is interest among those who responded to the survey to validate a comprehensive geriatric assessment for use as a predictive instrument of toxicity and/or activity of anticancer therapy and to evaluate the role of a treatment option that is potentially less toxic and possibly as effective as polychemotherapy.
Abstract:
PURPOSE: To compare health-related quality of life (HRQOL) in patients with metastatic breast cancer receiving the combination of doxorubicin and paclitaxel (AT) or doxorubicin and cyclophosphamide (AC) as first-line chemotherapy treatment. PATIENTS AND METHODS: Eligible patients (n = 275) with anthracycline-naive measurable metastatic breast cancer were randomly assigned to AT (doxorubicin 60 mg/m(2) as an intravenous bolus plus paclitaxel 175 mg/m(2) as a 3-hour infusion) or AC (doxorubicin 60 mg/m(2) plus cyclophosphamide 600 mg/m(2)) every 3 weeks for a maximum of six cycles. Dose escalation of paclitaxel (200 mg/m(2)) and cyclophosphamide (750 mg/m(2)) was planned at cycle 2 to reach equivalent myelosuppression in the two groups. HRQOL was assessed with the European Organization for Research and Treatment of Cancer (EORTC) Quality of Life Questionnaire C30 and the EORTC Breast Module at baseline and the start of cycles 2, 4, and 6, and 3 months after the last cycle. RESULTS: Seventy-nine percent of the patients (n = 219) completed a baseline measure. However, there were no statistically significant differences in HRQOL between the two treatment groups. In both groups, selected aspects of HRQOL were impaired over time, with increased fatigue, although some clinically significant improvements in emotional functioning were seen, as well as a reduction in pain over time. Overall, global quality of life was maintained in both treatment groups. CONCLUSION: This information is important when advising women patients of the expected HRQOL consequences of treatment regimens and should help clinicians and their patients make informed treatment decisions.
Abstract:
PURPOSE: The combination of continuous-infusion 5-fluorouracil, epirubicin (50 mg/m2 q 3 weeks) and a platinum compound (cisplatin or carboplatin) was found to be very active in patients with either locally advanced/inflammatory (LA/I) [1, 2] or large operable (LO) breast cancer (BC) [3]. The same level of activity in terms of response rate (RR) and response duration was observed in LA/I BC patients when cisplatin was replaced by cyclophosphamide [4]. The dose of epirubicin was either 50 mg/m2 [1, 2, 3] or 60 mg/m2/cycle [4]. The main objective of this study was to determine the maximum tolerated dose (MTD) of epirubicin when given in combination with fixed doses of cyclophosphamide and infusional 5-fluorouracil (CEF-infu) as neoadjuvant therapy in patients with LO or LA/I BC for a maximum of 6 cycles. PATIENTS AND METHODS: Eligible patients had LO or LA/I BC, a performance status of 0-1, adequate organ function, and were <65 years old. Cyclophosphamide was administered at a dose of 400 mg/m2 on days 1 and 8, q 4 weeks, and infusional 5-fluorouracil 200 mg/m2/day was given on days 1-28, q 4 weeks. Epirubicin was escalated from 30 to 45 and then to 60 mg/m2 on days 1 and 8; dose escalation was permitted if 0/3 or 1/6 patients experienced dose-limiting toxicity (DLT) during the first 2 cycles of therapy. DLT for epirubicin was defined as febrile neutropenia, grade 4 neutropenia lasting >7 days, grade 4 thrombocytopenia, or any non-haematological toxicity of CTC grade ≥3, excluding alopecia and palmar-plantar erythrodysesthesia (this toxicity was attributable to infusional 5-fluorouracil and was not considered a DLT of epirubicin). RESULTS: A total of 21 patients, median age 44 years (range 29-63), have been treated. 107 courses have been delivered, with a median of 5 cycles per patient (range 4-6). DLTs in cycles 1 and 2 at dose levels 1, 2 and 3: grade 3 (G3) mucositis occurred in 1/10 patients treated at the third dose level. An interim analysis showed that G3 palmar-plantar erythrodysesthesia (PPE) occurred in 5/16 patients treated with the 28-day infusional 5-FU schedule across the 3 dose levels. The protocol was subsequently amended to limit the duration of the 5-fluorouracil infusion from 4 to 3 weeks. No G3 PPE was detected in the 5 patients treated with this new schedule. CONCLUSIONS: This study establishes that epirubicin 60 mg/m2 on days 1 and 8, cyclophosphamide 400 mg/m2 on days 1 and 8 and infusional 5-fluorouracil 200 mg/m2/day on days 1-21, q 4 weeks, is the recommended dose level. Given the encouraging activity of this regimen (15/21 clinical responses), we have replaced infusional 5-fluorouracil with oral capecitabine in a recently activated study.
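The "0/3 or 1/6" escalation criterion above is the classic 3+3 rule; the sketch below encodes only that per-dose-level decision logic, with illustrative names, and omits cohort management, dose levels and DLT definitions.

```python
# Decision logic of a conventional 3+3 dose-escalation rule.
def three_plus_three_decision(n_treated, n_dlt):
    """Return 'escalate', 'expand' (treat 3 more at the same dose), or 'stop'
    (maximum tolerated dose exceeded) for a single dose level."""
    if n_treated == 3:
        if n_dlt == 0:
            return "escalate"          # 0/3 DLTs: next cohort at higher dose
        if n_dlt == 1:
            return "expand"            # 1/3 DLTs: enrol 3 more at this dose
        return "stop"                  # >=2/3 DLTs: dose too toxic
    if n_treated == 6:
        return "escalate" if n_dlt <= 1 else "stop"  # 1/6 DLTs still allows escalation
    raise ValueError("3+3 decisions are made after cohorts of 3 or 6 patients")

print(three_plus_three_decision(3, 0))  # escalate
print(three_plus_three_decision(6, 1))  # escalate
print(three_plus_three_decision(6, 2))  # stop
```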