111 results for Data dissemination and sharing


Relevance: 100.00%

Abstract:

The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains, and sudden cardiac death continues to be a presenting feature for some patients subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to these data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers, calculation of moving averages, and data summarisation and data abstraction. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population, and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads). Because of the unbalanced class distribution in the data, majority class under-sampling and the Kappa statistic, together with misclassification rate and area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be the most consistently effective, although Consistency-derived subsets tended toward slightly increased accuracy at the cost of markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This influence can be eliminated by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to between 9.8 and 10.16, with the time-segmented summary data (dataset F) MR being 9.8 and the raw time-series summary data (dataset A) being 9.92. However, for all datasets based on time-series data alone, the complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-alone datasets, but models derived from these subsets consist of one leaf only. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method.
For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) being 8.85 and dataset RF_F (time-segmented time-series variables and RF) being 9.09. The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on variables derived from time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform least well, with MRs of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MRs of 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MRs of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The increase in predictive accuracy achieved by adding risk factor variables to time-series-variable-based models is significant. The addition of time-series-derived variables to models based on risk factor variables alone is associated with a trend towards improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables, compared with the use of risk factors alone, parallels recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables are used together as model input. In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological values falling outside the accepted normal range, is associated with some improvement in model performance.
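
To make the evaluation strategy described in this abstract concrete, below is a minimal Python sketch of majority-class under-sampling followed by evaluation with misclassification rate, Cohen's Kappa and AUC. The study itself used Weka-based methods (Cfs, J48, SMO, NNge); this scikit-learn analogue, run on synthetic stand-in data, is purely illustrative.

```python
# Illustrative sketch (not the study's Weka pipeline): majority-class
# under-sampling followed by evaluation with misclassification rate,
# Cohen's kappa and AUC, as discussed in the abstract above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def undersample_majority(X, y):
    """Randomly drop majority-class rows until classes are balanced."""
    classes, counts = np.unique(y, return_counts=True)
    minority, n_min = classes[np.argmin(counts)], counts.min()
    keep = []
    for c in classes:
        idx = np.flatnonzero(y == c)
        if c != minority:
            idx = rng.choice(idx, size=n_min, replace=False)
        keep.append(idx)
    keep = np.concatenate(keep)
    return X[keep], y[keep]

# Synthetic stand-in for the anaesthesia time-series + risk-factor features.
X = rng.normal(size=(2000, 10))
y = (rng.random(2000) < 0.1).astype(int)          # ~10% positive class

X_bal, y_bal = undersample_majority(X, y)
X_tr, X_te, y_tr, y_te = train_test_split(X_bal, y_bal, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict(X_te)
prob = model.predict_proba(X_te)[:, 1]

print("misclassification rate:", 1 - (pred == y_te).mean())
print("kappa:", cohen_kappa_score(y_te, pred))
print("AUC:", roc_auc_score(y_te, prob))
```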

Relevance: 100.00%

Abstract:

In a seminal data mining article, Leo Breiman [1] argued that to develop effective predictive classification and regression models, we need to move away from sole dependency on statistical algorithms and embrace a wider toolkit of modeling algorithms that includes data mining procedures. Nevertheless, many researchers still rely solely on statistical procedures when undertaking data modeling tasks; this sole reliance has led to the development of irrelevant theory and questionable research conclusions ([1], p. 199). We outline initiatives that the HPC & Research Support group is undertaking to engage researchers with data mining tools and techniques, including a new range of seminars, workshops, and one-on-one consultations covering data mining algorithms, the relationship between data mining and the research cycle, and the limitations and problems of these new algorithms. Organisational limitations and restrictions affecting these initiatives are also discussed.
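
Breiman's "two cultures" argument can be illustrated with a short sketch (our own, not from the abstract): fit a classical statistical model and an algorithmic model on the same data and compare out-of-sample performance, assuming scikit-learn is available.

```python
# Minimal illustration of Breiman's "two cultures": compare a classical
# statistical model against an algorithmic (data mining) model on the
# same data, using cross-validated accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=5, random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(random_state=0))]:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {score:.3f}")
```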

Relevance: 100.00%

Abstract:

The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices store data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify them slightly for higher density storage. Alternatively, three-dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three-dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three-dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but few commercial products are presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two-dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium has its intensity profile modified by these refractive index changes; a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two-dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask containing a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is thermal fixing, in which the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. This ionic grating is insensitive to the readout beam, and the information is therefore not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could readily be applied where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three-dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that, simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the times at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any smaller size results in incomplete recovery. The degradation and recovery process could be applied to image scrambling or cryptography for optical information storage. A two-dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process.
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three-dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly the degradation and recovery process, and confirms the theory that recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes compared with stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
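
The non-contact temperature measurement described above reduces to counting intensity oscillations in a recorded trace. Below is a minimal sketch of that idea; the calibration constant relating one oscillation to a temperature step is a hypothetical placeholder, not a value from the thesis.

```python
# Sketch of the oscillation-counting idea described above: each full
# intensity oscillation of the probe beam corresponds to a calibrated
# temperature step, so counting peaks yields the total temperature change.
# DELTA_T_PER_OSCILLATION is a hypothetical calibration constant.
import numpy as np
from scipy.signal import find_peaks

DELTA_T_PER_OSCILLATION = 1.8   # degrees C per oscillation (hypothetical)

def temperature_change(intensity: np.ndarray) -> float:
    """Estimate temperature change from a recorded intensity trace."""
    peaks, _ = find_peaks(intensity, prominence=0.5)  # ignore noise wiggles
    return len(peaks) * DELTA_T_PER_OSCILLATION

# Simulated intensity trace: 12 oscillations plus measurement noise.
t = np.linspace(0, 1, 5000)
trace = (np.sin(2 * np.pi * 12 * t)
         + 0.05 * np.random.default_rng(0).normal(size=t.size))
print(f"estimated temperature change: {temperature_change(trace):.1f} C")
```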

Relevance: 100.00%

Abstract:

Objective: To assess the accuracy of data linkage across the spectrum of emergency care in the absence of a unique patient identifier, and to use the linked data to examine service delivery outcomes in an emergency department setting. Design: Automated data linkage and manual data linkage were compared to determine their relative accuracy. Data were extracted from three separate health information systems (ambulance, ED and hospital inpatients), then linked to provide information about the emergency journey of each patient. The linking was done manually, through physical review of records, and automatically, using a data linking tool (Health Data Integration) developed by the CSIRO. Match rate and quality of the linking were compared. Setting: 10,835 patient presentations to a large regional teaching hospital ED over a two-month period (August-September 2007). Results: Comparison of the manual and automated linkage outcomes for each pair of linked datasets demonstrated a sensitivity of between 95% and 99%, a specificity of between 75% and 99%, and a positive predictive value of between 88% and 95%. Conclusions: Our results indicate that automated linking provides a sound basis for health service analysis, even in the absence of a unique patient identifier. The use of an automated linking tool yields accurate data suitable for planning and service delivery purposes, and enables the data to be linked regularly to examine service delivery outcomes.
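
The accuracy measures reported above follow directly from a confusion matrix that treats the manual linkage as ground truth. A minimal sketch, with hypothetical counts rather than the study's figures:

```python
# Sketch of the accuracy measures reported above, treating the manual
# linkage as ground truth for the automated linker. The counts below are
# hypothetical placeholders, not figures from the study.
def linkage_accuracy(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),   # true links found
        "specificity": tn / (tn + fp),   # non-links left unlinked
        "ppv": tp / (tp + fp),           # linked pairs that are real links
    }

print(linkage_accuracy(tp=950, fp=60, tn=9700, fn=25))
```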

Relevance: 100.00%

Abstract:

Monitoring and assessing environmental health is becoming increasingly important as human activity and climate change place greater pressure on global biodiversity. Acoustic sensors provide the ability to collect data passively, objectively and continuously across large areas for extended periods of time. While these factors make acoustic sensors attractive as autonomous data collectors, there are significant issues associated with large-scale data manipulation and analysis. We present our current research into techniques for analysing large volumes of acoustic data effectively and efficiently. We provide an overview of a novel online acoustic environmental workbench and discuss a number of approaches to scaling the analysis of acoustic data: collaboration, manual, automatic and human-in-the-loop analysis.

Relevance: 100.00%

Abstract:

Participatory sensing enables the collection, processing, dissemination and analysis of environmental sensory data by ordinary citizens through mobile devices. Researchers have recognized the potential of participatory sensing and attempted to apply it to many areas. However, participants may submit low quality, misleading, inaccurate, or even malicious data. Therefore, finding a way to improve data quality has become a significant issue. This study proposes using reputation management to classify the gathered data and provide useful information for campaign organizers and data analysts to facilitate their decisions.
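
The abstract proposes reputation management without prescribing a scheme; one common choice is a beta-reputation score per contributor, updated as submissions are validated and then used to classify incoming data. A hypothetical sketch under that assumption:

```python
# Hypothetical sketch of reputation management for participatory sensing:
# a beta reputation score per contributor, updated when a submission is
# judged good or bad, then used to classify incoming data. The abstract
# proposes reputation management but does not prescribe this scheme.
class Reputation:
    def __init__(self):
        self.good = 1.0   # prior pseudo-counts (uninformative prior)
        self.bad = 1.0

    def update(self, submission_ok: bool) -> None:
        if submission_ok:
            self.good += 1
        else:
            self.bad += 1

    @property
    def score(self) -> float:
        return self.good / (self.good + self.bad)

reps: dict[str, Reputation] = {}

def classify(contributor: str, threshold: float = 0.7) -> str:
    """Label incoming data by its contributor's current reputation."""
    rep = reps.setdefault(contributor, Reputation())
    return "trusted" if rep.score >= threshold else "needs review"

reps.setdefault("alice", Reputation())
for ok in [True, True, True, False, True]:   # validated submission history
    reps["alice"].update(ok)
print(classify("alice"))                     # "trusted" (score = 5/7 ~ 0.71)
```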

Relevance: 100.00%

Abstract:

Several authors stress the importance of data as a crucial foundation for operational, tactical and strategic decisions (e.g., Redman 1998; Tee et al. 2007). Data provides the basis for decision making, as data collection and processing are typically associated with reducing uncertainty in order to make more effective decisions (Daft and Lengel 1986). While the first series of investments in Information Systems/Information Technology (IS/IT) in organizations improved data collection, restricted computational capacity and limited processing power created challenges (Simon 1960). Fifty years on, capacity and processing problems are increasingly less relevant; in fact, the opposite problem exists. Determining data relevance and usefulness is complicated by increased data capture and storage capacity, as well as continual improvements in information processing capability. As the IT landscape changes, businesses are inundated with ever-increasing volumes of data from both internal and external sources, available on both an ad-hoc and real-time basis. More data, however, does not necessarily translate into more effective and efficient organizations, nor does it increase the likelihood of better or timelier decisions. This raises questions about what data managers require to assist their decision-making processes.

Relevance: 100.00%

Abstract:

With the advent of social web initiatives, some have argued that these new emerging tools might be useful for tacit knowledge sharing by providing interactive and collaborative technologies. However, there is still a paucity of literature on how, and what, social media might contribute to facilitating tacit knowledge sharing. This paper therefore theoretically investigates and maps social media concepts and characteristics onto tacit knowledge creation and sharing requirements. Through a systematic literature review, five major requirements were found that need to be present in an environment that involves tacit knowledge sharing. These requirements were analyzed against social media concepts and characteristics to see how they map together. The results show that social media can satisfy some of the main requirements of tacit knowledge sharing. The relationships are illustrated in a conceptual framework, and further empirical studies are suggested to validate the findings of this study.

Relevance: 100.00%

Abstract:

Monitoring environmental health is becoming increasingly important as human activity and climate change place greater pressure on global biodiversity. Acoustic sensors provide the ability to collect data passively, objectively and continuously across large areas for extended periods. While these factors make acoustic sensors attractive as autonomous data collectors, there are significant issues associated with large-scale data manipulation and analysis. We present our current research into techniques for analysing large volumes of acoustic data efficiently. We provide an overview of a novel online acoustic environmental workbench and discuss a number of approaches to scaling the analysis of acoustic data: online collaboration, manual, automatic and human-in-the-loop analysis.

Relevance: 100.00%

Abstract:

This paper describes the design and study of public urban screen applications that aim to enable urban dwellers to control content shown on public urban screens. Two types of content sharing are presented: aggregating existing social media content about particular locations for sharing, and sharing online videos with collocated people at a public urban screen. The paper describes an exploratory study, an observational study and an interpretational study with regard to application usage and user experience. Sharing content on public urban screens can pique the curiosity of users towards collocated people and the application itself, resulting in raised awareness of collocated people.

Relevance: 100.00%

Abstract:

Breast cancer is a leading contributor to the burden of disease in Australia. Fortunately, the recent introduction of diverse therapeutic strategies has improved the survival outcome for many women. Despite this, the clinical management of breast cancer remains problematic, as not all approaches are sufficiently sophisticated to take into account the heterogeneity of this disease, and they are unable to predict disease progression, in particular metastasis. As such, women with good prognostic outcomes are exposed to the side effects of therapies without added benefit. Furthermore, women with aggressive disease, for whom these advanced treatments would deliver benefit, cannot be distinguished, and opportunities for more intensive or novel treatment are lost. This study is designed to identify novel factors associated with disease progression that have the potential to inform disease prognosis. Frequently overlooked yet common mediators of disease are the interactions that take place between the insulin-like growth factor (IGF) system and the extracellular matrix (ECM). Our laboratory has previously demonstrated that multiprotein insulin-like growth factor-I (IGF-I):insulin-like growth factor binding protein (IGFBP):vitronectin (VN) complexes stimulate migration of breast cancer cells in vitro, via the cooperative involvement of the insulin-like growth factor type I receptor (IGF-IR) and VN-binding integrins. However, the effects of IGF and ECM protein interactions on the dissemination and progression of breast cancer in vivo are unknown. It was hypothesised that interactions between proteins required for IGF-induced signalling events and those within the ECM contribute to breast cancer metastasis and are prognostic and predictive indicators of patient outcome. To address this hypothesis, semiquantitative immunohistochemistry (IHC) analyses were performed to compare the extracellular and subcellular distribution of IGF- and ECM-induced signalling proteins among matched normal, primary cancer and metastatic cancer archival formalin-fixed paraffin-embedded (FFPE) breast tissue samples collected from women attending the Princess Alexandra Hospital, Brisbane. Multivariate Cox proportional hazards (PH) regression survival models, in conjunction with a modified 'purposeful selection of covariates' method, were applied to determine the prognostic potential of these proteins. This study provides the first in-depth, compartmentalised analysis of the distribution of IGF- and ECM-induced signalling proteins. As protein function and protein localisation are closely correlated, these findings provide novel insights into IGF signalling and ECM protein function during breast cancer development and progression. Distinct IGF signalling and ECM protein immunoreactivity was observed in the stroma and/or in subcellular locations in normal breast, primary cancer and metastatic cancer tissues. Analysis of the presence and location of stratifin (SFN) suggested a causal relationship in ECM remodelling events during breast cancer development and progression. The results of this study also suggest that fibronectin (FN) and β1 integrin are important for the formation of invadopodia and for epithelial-to-mesenchymal transition (EMT) events.
Our data also highlight the importance of the temporal and spatial distribution of IGF-induced signalling proteins in breast cancer metastasis; in particular, SFN, enhancer-of-split and hairy-related protein 2 (SHARP-2), total AKT/protein kinase B 1 (Total-AKT1), phosphorylated AKT/protein kinase B (P-AKT), extracellular signal-regulated kinases 1 and 2 (ERK1/2) and phosphorylated ERK1/2 (P-ERK1/2). Multivariate survival models were created from the immunohistochemical data. These models were found to fit the data well, with very high statistical confidence. Numerous prognostic confounding effects and effect modifications were identified among elements of the ECM and IGF signalling cascade, and these corroborate the survival models. This finding provides further evidence for the prognostic potential of IGF- and ECM-induced signalling proteins. In addition, the adjusted measures of association obtained in this study strengthen the validity and utility of the resulting models. The findings from this study provide insights into the biological interactions that occur during the development of breast tissue and contribute to disease progression. Importantly, these multivariate survival models could provide important prognostic and predictive indicators to assist the clinical management of breast disease, namely in the early identification of cancers with a propensity to metastasise and/or recur following adjuvant therapy. The outcomes of this study further inform the development of new therapeutics to aid patient recovery. The findings have widespread clinical application in the diagnosis of disease and the prognosis of disease progression, and inform the most appropriate clinical management of individuals with breast cancer.
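
As an illustration of the multivariate Cox proportional hazards modelling described above, here is a minimal sketch assuming the Python lifelines library; the column names are hypothetical stand-ins for the IHC marker scores, not the study's variables.

```python
# Minimal sketch of multivariate Cox proportional hazards survival
# modelling, as described in the abstract above, using lifelines.
# Column names and values are hypothetical stand-ins for IHC scores.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time_months":  [12, 30, 45, 8, 60, 22, 51, 15],
    "event":        [1, 0, 1, 1, 0, 1, 0, 1],      # 1 = progression/death
    "SFN_score":    [2, 1, 3, 3, 1, 2, 0, 1],      # hypothetical IHC scores
    "P_AKT_score":  [1, 1, 3, 2, 0, 1, 2, 2],
})

# Small ridge penalty stabilises the fit on this tiny illustrative dataset.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="time_months", event_col="event")
cph.print_summary()   # hazard ratios and confidence intervals per marker
```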

Relevance: 100.00%

Abstract:

Tacit knowledge sharing amongst physicians, such as the sharing of clinical experiences, skills, know-how, or know-whom, is known to have a significant impact on the quality of medical diagnosis and decisions. This paper posits that social media can provide new opportunities for tacit knowledge sharing amongst physicians, and demonstrates this by presenting findings from a review of relevant literature and from a study conducted with physicians. Semi-structured interviews were conducted with ten physicians from around the world who were active users of social media. Initial thematic analysis revealed eight themes as potential contributions of social web tools to facilitating tacit knowledge flow amongst physicians. The emergent themes are defined, linked to the literature, and supported by instances from the interview transcripts. The findings presented here are preliminary; final results will be reported once all phases of data collection and analysis are complete.

Relevance: 100.00%

Abstract:

What are the information practices of teen content creators? In the United States, over two-thirds of teens have participated in creating and sharing content in online communities that are developed for the purpose of allowing users to be producers of content. This study investigates how teens participating in digital participatory communities find and use information, as well as how they experience that information. From this investigation emerged a model of their information practices while creating and sharing content, such as film-making, visual art work, storytelling, music, programming, and web site design, in digital participatory communities. The research uses grounded theory methodology in a social constructionist framework to investigate the research problem: what are the information practices of teen content creators? Data were gathered through semi-structured interviews and observation of teens' digital communities. Analysis occurred concurrently with data collection, and the principle of constant comparison was applied in analysis. As findings were constructed from the data, additional data were collected until a substantive theory was constructed and no new information emerged from data collection. The theory constructed from the data describes five information practices of teen content creators: learning community, negotiating aesthetic, negotiating control, negotiating capacity, and representing knowledge. Describing the five information practices requires three descriptive components: the community of practice, the experiences of information, and the information actions. The experiences of information include information as participation, inspiration, collaboration, process, and artifact. Information actions include activities in the categories of gathering, thinking and creating. The experiences of information and the information actions intersect in the information practices, which are situated within a specific community of practice, such as a digital participatory community. Finally, the information practices interact with and build upon one another; this is represented in a graphic model and explanation.

Relevance: 100.00%

Abstract:

Intelligent Transport Systems (ITS) provide the infrastructure for ubiquitous computing in the car. They encompass (a) all kinds of sensing technologies within vehicles as well as road infrastructure, (b) wireless communication protocols for the sensed information to be exchanged between vehicles (V2V) and between vehicles and infrastructure (V2I), and (c) appropriate intelligent algorithms and computational technologies that process these real-time streams of information. As such, ITS can be considered a game changer: it provides the fundamental basis for new, innovative concepts and applications, much like the Internet itself. The information sensed or gathered within or around the vehicle has led to a variety of context-aware in-vehicle technologies. A simple example is the Anti-lock Braking System (ABS), which releases the brakes when sensors detect that the wheels are locked. We refer to this type of context awareness as vehicle/technology awareness. V2V and V2I communication, often summarized as V2X, enables the exchange and sharing of sensed information amongst cars. As a result, the vehicle/technology awareness horizon of each individual car is expanded beyond its observable surroundings, paving the way to technologically enhance such already advanced systems. In this chapter, we draw attention to those application areas of sensing and V2X technologies where the human (driver), the human's behavior, and hence the psychological perspective play a more pivotal role. The focal points of our project are illustrated in Figure 1. In all areas, the vehicle first (1) gathers or senses information about the driver. Rather than limiting the use of such information to vehicle/technology awareness, we see great potential for applications in which this sensed information is then (2) fed back to the driver for increased self-awareness. In addition, by using V2V technologies, it can also be (3) passed to surrounding drivers for increased social awareness, or (4) pushed even further into the cloud, where it is collected and visualized for an increased, collective urban awareness within the urban community at large, which includes all city dwellers.
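
A hypothetical sketch of the three awareness levels described in this chapter, showing how sensed driver-state information might be fed back to the driver, broadcast over V2V, and pushed to the cloud; all names and fields are illustrative and not drawn from any V2X standard.

```python
# Hypothetical sketch of the awareness levels described above: driver
# state is sensed, fed back to the driver (self-awareness), broadcast
# over V2V (social awareness) and pushed to the cloud (urban awareness).
# Field and function names are illustrative only.
from dataclasses import dataclass

@dataclass
class DriverState:
    vehicle_id: str
    stress_level: float      # 0.0 (calm) .. 1.0 (highly stressed)
    speed_kmh: float

def feed_back_to_driver(state: DriverState) -> None:
    print(f"[dashboard] stress {state.stress_level:.1f}")     # self-awareness

def broadcast_v2v(state: DriverState, neighbours: list[str]) -> None:
    for n in neighbours:                                      # social awareness
        print(f"[V2V] -> {n}: {state.vehicle_id} stress {state.stress_level:.1f}")

def push_to_cloud(state: DriverState) -> None:
    print(f"[cloud] logged {state}")                          # urban awareness

state = DriverState("car-42", stress_level=0.8, speed_kmh=95.0)
feed_back_to_driver(state)
broadcast_v2v(state, ["car-17", "car-88"])
push_to_cloud(state)
```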

Relevance: 100.00%

Abstract:

High tumor kallikrein-related peptidase 4 (KLK4) levels are associated with a poor outcome for women with serous epithelial ovarian cancer (EOC), in which peritoneal dissemination and chemoresistance are key events. To determine the role of KLK4 in these events, we examined KLK4-transfected SKOV-3 cells and endogenously KLK4-expressing OVCA432 cells in three-dimensional (3D) suspension culture to mimic the ascites microenvironment. KLK4-SKOV-3 cells formed multicellular aggregates (MCAs), as seen in ascites, as did SKOV-3 cells treated with active KLK4. MCA formation was reduced by treatment with a KLK4-blocking antibody or the selective active-site KLK4 inhibitor based on sunflower trypsin inhibitor (SFTI-FCQR). KLK4-MCAs formed larger cancer cell foci in mesothelial cell monolayers than those formed by vector and native SKOV-3 cells, suggesting that KLK4-MCAs are highly invasive in the peritoneal microenvironment. A high level of KLK4 is expressed by ascitic EOC cells compared to matched primary tumor cells, further supporting its role in the ascitic microenvironment. Interestingly, KLK4-transfected SKOV-3 cells expressed high levels of the KLK4 substrate urokinase plasminogen activator (uPA), particularly in 3D suspension, and high levels of both KLK4 and uPA were observed in patient cells taken from ascites. Importantly, the KLK4-MCAs were paclitaxel-resistant; this resistance was reversed by SFTI-FCQR and, to a lesser degree, by the general serine protease inhibitor aprotinin, suggesting that, in addition to uPA, other as yet unidentified substrates of KLK4 must be involved. Nonetheless, these data suggest that KLK4 inhibition, in conjunction with paclitaxel, may improve the outcome for women with serous epithelial ovarian cancer and high KLK4 levels in their tumors.