920 results for diffusive viscoelastic model, global weak solution, error estimate


Relevance:

100.00%

Publisher:

Abstract:

Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³√((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
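As a quick worked illustration of the bound's second term, a minimal sketch; the values of A, n, and m below are hypothetical, and log A and log m factors are ignored as in the abstract:

```python
import math

def weight_size_term(A, n, m):
    """Second term of the misclassification bound: A**3 * sqrt(log(n)/m).

    A: bound on the sum of weight magnitudes per unit,
    n: input dimension, m: number of training patterns.
    Log A and log m factors are ignored, as stated in the abstract.
    """
    return A**3 * math.sqrt(math.log(n) / m)

# The term shrinks with more training data m, depends on the input
# dimension n only logarithmically, and not on the number of weights.
print(weight_size_term(A=2.0, n=100, m=10_000))     # ~0.17
print(weight_size_term(A=2.0, n=100, m=1_000_000))  # ~0.017
```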

Relevance:

100.00%

Publisher:

Abstract:

Few studies have investigated iatrogenic outcomes from the viewpoint of patient experience. To address this gap, the broad aim of this research is to explore the lived experience of patient harm. Patient harm is defined as major harm to the patient, either psychosocial or physical in nature, resulting from any aspect of health care. Utilising the method of Consensual Qualitative Research (CQR), in-depth interviews are conducted with twenty-four volunteer research participants who self-report having been severely harmed by an invasive medical procedure. A standardised measure of emotional distress, the Impact of Event Scale (IES), is additionally employed for purposes of triangulation. Thematic analysis of the transcript data indicates numerous findings, including: (i) difficulties regarding patients’ prior understanding of the risks involved with their medical procedure; (ii) the problematic response of the health system post-procedure; (iii) multiple adverse effects upon life functioning; (iv) limited recourse options for patients; and (v) the approach desired in terms of how patient harm should be systemically handled. In addition, IES results indicate a clinically significant level of distress in the sample as a whole. To discuss the findings, a cross-disciplinary approach is adopted that draws upon sociology, medicine, medical anthropology, psychology, philosophy, history, ethics, law, and political theory. Furthermore, an overall explanatory framework is proposed in terms of the master themes of power and trauma. In terms of the theme of power, a postmodernist analysis explores the politics of patient harm, particularly the dynamics surrounding the politics of knowledge (e.g., notions of subjective versus objective knowledge, informed consent, and open disclosure). This analysis suggests that patient care is not the prime function of the health system, which appears more focussed upon serving the interests of those in the upper levels of its hierarchy. In terms of the master theme of trauma, current understandings of posttraumatic stress disorder (PTSD) are critiqued, and based on data from this research as well as the international literature, a new model of trauma is proposed. This model is based upon the principle of homeostasis observed in biology, whereby within every cell or organism a state of equilibrium is sought and maintained. The proposed model identifies several bio-psychosocial markers of trauma across its three main phases. These trauma markers include: (i) a profound sense of loss; (ii) a lack of perceived control; (iii) passive trauma processing responses; (iv) an identity crisis; (v) a quest to fully understand the trauma event; (vi) a need for social validation of the traumatic experience; and (vii) posttraumatic adaptation with the possibility of positive change. To further explore the master themes of power and trauma, a natural group interview is carried out at a meeting of a patient support group for arachnoiditis. Observations at this meeting, and members’ stories in general, support the homeostatic model of trauma, particularly the quest to find answers in the face of distressing experience, as well as the need for social recognition of that experience. In addition, the sociopolitical response to arachnoiditis highlights how public domains of knowledge are largely constructed and controlled by vested interests. The implications of the data overall are discussed in terms of a cultural revolution being needed in health care to reposition core values around a prime focus upon patients as human beings.

Relevance:

100.00%

Publisher:

Abstract:

Modern technology now has the ability to generate large datasets over space and time. Such data typically exhibit high autocorrelations over all dimensions. The field trial data motivating the methods of this paper were collected to examine the behaviour of traditional cropping and to determine a cropping system which could maximise water use for grain production while minimising leakage below the crop root zone. They consist of moisture measurements made at 15 depths across 3 rows and 18 columns, in the lattice framework of an agricultural field. Bayesian conditional autoregressive (CAR) models are used to account for local site correlations. Conditional autoregressive models have not been widely used in analyses of agricultural data. This paper serves to illustrate the usefulness of these models in this field, along with the ease of implementation in WinBUGS, a freely available software package. The innovation is the fitting of separate conditional autoregressive models for each depth layer, the ‘layered CAR model’, while simultaneously estimating depth profile functions for each site treatment. Modelling interest also lay in how best to model the treatment effect depth profiles, and in the choice of neighbourhood structure for the spatial autocorrelation model. The favoured model fitted the treatment effects as splines over depth, treated depth, the basis for the regression model, as measured with error, and fitted CAR neighbourhood models by depth layer. It is hierarchical, with separate conditional autoregressive spatial variance components at each depth, and its fixed terms involve an errors-in-measurement model that treats depth errors as interval-censored measurement error. The Bayesian framework permits transparent specification and easy comparison of the various complex models considered.
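For reference, a standard CAR conditional prior, the general form such layered models typically build on, is sketched below; the paper's exact specification may differ:

```latex
% CAR conditional prior for the spatial effect \phi_i at lattice site i
% within one depth layer; \partial_i is the neighbour set of site i and
% n_i its size. The `layered CAR model' fits a separate variance
% \sigma^2_{\ell} (and neighbourhood structure) for each depth layer \ell.
\phi_i \mid \phi_{-i} \sim
  \mathrm{N}\!\left( \frac{\rho}{n_i} \sum_{j \in \partial_i} \phi_j,\;
                     \frac{\sigma^2_{\ell}}{n_i} \right)
```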

Relevance:

100.00%

Publisher:

Abstract:

EMR (Electronic Medical Record) is an emerging technology that blends the non-IT and IT domains; one way of linking the two is to construct databases. Nowadays, an EMR supports patient care before and after treatment and should satisfy all stakeholders, such as practitioners, nurses, researchers, administrators, and financial departments. For database maintenance, the DAS (Data as a Service) model is one outsourcing solution; however, there are scalability and strategy issues to consider when planning to use the DAS model properly. We constructed three kinds of databases, scaling from 5K to 2560K records: plain-text, MS built-in encryption (an in-house model), and a custom AES (Advanced Encryption Standard) DAS model. To improve the performance of the custom AES-DAS model, we also devised a bucket index using a Bloom filter. The simulation showed that response times increased arithmetically at first but, beyond a certain threshold, increased exponentially. In conclusion, if the database model is close to an in-house model, vendor technology is a good way to obtain consistent query response times. With a DAS model the database is easy to outsource, but techniques such as the bucket index enhance its utilization. Careful database design, such as consideration of field types, is also important for fast query response times. This study suggests cloud computing as the next DAS model to address the scalability and security issues.
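As an illustrative sketch of the bucket-index idea, assuming a simple per-bucket Bloom filter; the names and parameters below are hypothetical, not the paper's actual implementation:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k salted SHA-256 hashes over an m-bit array."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, item):
        # Derive k bit positions from salted digests of the item.
        for salt in range(self.k):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item):
        # May return a false positive, never a false negative.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

# Bucket index over an encrypted table: the server keeps only coarse
# bucket ids plus a Bloom filter per bucket, so a query can discard
# buckets without decrypting any records.
index = {}  # bucket_id -> BloomFilter

def index_record(bucket_id, key):
    index.setdefault(bucket_id, BloomFilter()).add(key)

def candidate_buckets(key):
    return [b for b, bf in index.items() if bf.might_contain(key)]
```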

Relevance:

100.00%

Publisher:

Abstract:

Recently, ‘business model’ and ‘business model innovation’ have gained substantial attention in management literature and practice. However, many firms lack the capability to develop a novel business model to capture the value from new technologies. Existing literature on business model innovation highlights the central role of ‘customer value’. Further, it suggests that firms need to experiment with different business models and engage in ‘trial-and-error’ learning when participating in business model innovation. Trial-and-error processes and prototyping with tangible artifacts are a fundamental characteristic of design. This conceptual paper explores the role of design-led innovation in facilitating firms to conceive and prototype novel and meaningful business models. It provides a brief review of the conceptual discussion on business model innovation and highlights the opportunities for linking it with the research stream of design-led innovation. We propose design-led business model innovation as a future research area and highlight the role that design-led prototyping and new types of artifacts and prototypes play within it. We present six propositions to outline future research avenues.

Relevance:

100.00%

Publisher:

Abstract:

Many continuum mechanical models, such as liquid drop models and solid models, have been developed for single-living-cell biomechanics studies. However, these models do not fully capture the behaviour of single living cells, such as swelling behaviour and drag effects. Hence, the porohyperelastic (PHE) model, which can capture those aspects, is a good candidate for studying cell behaviour (chondrocytes in this study). In this research, a finite element model of a single chondrocyte is developed using the PHE model to simulate Atomic Force Microscopy (AFM) experimental results at varying strain rates. This material model is compared with a viscoelastic model to demonstrate the advantages of the PHE model. The results show that the maximum applied force in the PHE model is lower at lower strain rates. This is because the mobile fluid does not have enough time to exude at very high strain rates, and because the membrane is less permeable than the protoplasm of the chondrocyte. This behaviour is barely observed in the viscoelastic model. Thus, the PHE model is the better model for cell biomechanics studies.

Relevance:

100.00%

Publisher:

Abstract:

A juice flow model has been developed to estimate the juice expression at the four nips of a six-roller mill. An extended volumetric theory was applied to determine the juice expressed at each nip. The model was applied to a first and a final mill, using typical mill settings and an empirical equation to estimate reabsorption. Results of using the model for typical heavy-duty pressure feeder settings show that most of the juice is expressed at the pressure feeder nip. Since the pressure feeders are remote from the mill, a significant portion of the juice is expressed before the bagasse enters the mill.

Relevance:

100.00%

Publisher:

Abstract:

Bangkok Metropolitan Region (BMR) is the centre of various major activities in Thailand, including politics, industry, agriculture, and commerce. Consequently, the BMR is the most highly and densely populated area in Thailand, and the demand for houses in the BMR is also the largest, especially in subdivision developments. For these reasons, subdivision development in the BMR has increased substantially in the past 20 years, generating large numbers of subdivision developments (AREA, 2009; Kridakorn Na Ayutthaya & Tochaiwat, 2010). However, this dramatic growth has caused several problems in the BMR, including unsustainable development, especially for subdivision neighbourhoods. Rating tools exist that encourage sustainable neighbourhood design in subdivision development, but they still have practical problems: they do not cover the scale of the development entirely, and they concentrate on the social and environmental conservation aspects, which have not been fully accepted by developers (Boonprakub, 2011; Tongcumpou & Harvey, 1994). These factors strongly confirm the need for an appropriate rating tool for sustainable subdivision neighbourhood design in the BMR. To improve the level of acceptance among all stakeholders in the subdivision development industry, the new rating tool should be developed on an approach that unites the social, environmental, and economic dimensions, such as the eco-efficiency principle. Eco-efficiency is a sustainability indicator introduced by the World Business Council for Sustainable Development (WBCSD) in 1992, defined as the ratio of product or service value to its environmental impact (Lehni & Pepper, 2000; Sorvari et al., 2009). The eco-efficiency indicator thus addresses business concerns while simultaneously accounting for social and environmental impacts. This study aims to develop a new rating tool named the "Rating for Sustainable Subdivision Neighbourhood Design (RSSND)". The RSSND methodology is developed through a combination of literature reviews, field surveys, eco-efficiency model development, a trial-and-error technique, and a tool validation process. All required data were collected by field surveys from July to November 2010. The eco-efficiency model is a combination of three mathematical models attributable to subdivision neighbourhood design: the neighbourhood property price (NPP) model, the neighbourhood development cost (NDC) model, and the neighbourhood occupancy cost (NOC) model. The NPP model is formulated using the hedonic price model approach, while the NDC and NOC models are formulated using multiple regression analysis. The trial-and-error technique is adopted to simplify the complex mathematical eco-efficiency model into a user-friendly rating tool format. The credibility of the RSSND has been validated using eight subdivisions, both rated and non-rated. The tool is expected to meet the requirements of all stakeholders by supporting the social activities of residents, maintaining the environmental condition of the development and surrounding areas, and meeting the economic requirements of developers.
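From the WBCSD definition cited above, the underlying ratio can be written generically as follows; in the RSSND, the value and impact terms are assembled from the NPP, NDC, and NOC models:

```latex
% Generic eco-efficiency ratio (WBCSD): value created per unit of
% environmental impact. The exact operationalisation in the RSSND is
% built from the NPP, NDC, and NOC models described above.
\text{Eco-efficiency}
  = \frac{\text{product or service value}}{\text{environmental impact}}
```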

Relevance:

100.00%

Publisher:

Abstract:

Stations on Bus Rapid Transit (BRT) lines ordinarily control line capacity because they act as bottlenecks. At stations with passing lanes, congestion may occur when buses maneuvering into and out of the platform stopping lane interfere with bus flow, or when a queue of buses forms upstream of the station, blocking inflow. We contend that, as bus inflow to the station area approaches capacity, queuing will become excessive in a manner similar to the operation of a minor movement at an unsignalized intersection. This analogy is used to treat BRT station operation and to analyze the relationship between station queuing and capacity. In the first of three stages, we conducted microscopic simulation modeling to study and analyze the operating characteristics of the station under near-steady-state conditions through the output variables of capacity, degree of saturation, and queuing. A mathematical model was then developed to estimate the relationship between average queue and degree of saturation, and calibrated for a specified range of controlled scenarios of the mean and coefficient of variation of dwell time. Finally, the simulation results were used to calibrate and validate the model.
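As a rough illustration of the kind of queue-saturation relationship described (a stand-in only; the paper's calibrated model is not reproduced here), a steady-state M/M/1-style curve shows the characteristic blow-up as the degree of saturation approaches 1:

```python
def average_queue(x):
    """Illustrative steady-state average queue vs degree of saturation x.

    Uses the M/M/1 form L = x / (1 - x), 0 <= x < 1, purely as a
    stand-in for the calibrated queue-saturation relationship
    described in the abstract.
    """
    if not 0 <= x < 1:
        raise ValueError("degree of saturation must be in [0, 1)")
    return x / (1 - x)

for x in (0.5, 0.8, 0.9, 0.95):
    print(f"x = {x:.2f} -> average queue ~ {average_queue(x):.1f} buses")
```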

Relevance:

100.00%

Publisher:

Abstract:

Objectives: Currently, there are no studies combining electromyography (EMG) and sonography to estimate the absolute and relative strength values of the erector spinae (ES) muscles in healthy individuals. The purpose of this study was to establish whether the maximum voluntary contraction (MVC) of the ES during isometric contractions could be predicted from changes in surface EMG as well as in fiber pennation and thickness as measured by sonography. Methods: Thirty healthy adults performed 3 isometric extensions at 45° from the vertical to calculate the MVC force. Contractions at 33% and 100% of the MVC force were then used during sonographic and EMG recordings. These measurements were used to observe the architecture and function of the muscles during contraction. Statistical analysis was performed using bivariate regression and regression equations. Results: The slope for each regression equation was statistically significant (P < .001), with R² values of 0.837 and 0.986 for the right and left ES, respectively. The standard errors of the estimate between the sonographic measurements and the regression-estimated pennation angles for the right and left ES were 0.10 and 0.02, respectively. Conclusions: Erector spinae muscle activation can be predicted from the changes in fiber pennation during isometric contractions at 33% and 100% of the MVC force. These findings could be essential for developing a regression equation that could estimate the level of muscle activation from changes in the muscle architecture.
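A minimal sketch of the kind of bivariate regression reported, with hypothetical data standing in for the study's measurements:

```python
import numpy as np
from scipy import stats

# Hypothetical paired observations: EMG activation level (%MVC) and
# sonographically measured fiber pennation angle (degrees).
emg_activation = np.array([33, 41, 55, 68, 82, 100], dtype=float)
pennation_angle = np.array([12.1, 13.0, 14.6, 15.9, 17.2, 18.8])

# Bivariate linear regression: predict pennation angle from activation.
fit = stats.linregress(emg_activation, pennation_angle)
print(f"slope={fit.slope:.3f}, intercept={fit.intercept:.2f}, "
      f"R^2={fit.rvalue**2:.3f}, p={fit.pvalue:.4f}")

# Standard error of the estimate: residual spread around the fitted line.
predicted = fit.intercept + fit.slope * emg_activation
residual_se = np.sqrt(np.sum((pennation_angle - predicted) ** 2)
                      / (len(emg_activation) - 2))
print(f"standard error of the estimate: {residual_se:.3f}")
```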

Relevance:

100.00%

Publisher:

Abstract:

For point-to-point multiple-input multiple-output (MIMO) systems, Dayal, Brehler, and Varanasi have proved that training codes achieve the same diversity order as that of the underlying coherent space-time block code (STBC) if a simple minimum mean squared error estimate of the channel, formed using the training part, is employed for coherent detection of the underlying STBC. In this letter, a similar strategy involving a combination of training, channel estimation, and detection in conjunction with existing coherent distributed STBCs is proposed for noncoherent communication in Amplify-and-Forward (AF) relay networks. Simulation results show that the proposed simple strategy outperforms distributed differential space-time coding for AF relay networks. Finally, the proposed strategy is extended to asynchronous relay networks using orthogonal frequency division multiplexing.
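For context, a standard scalar linear MMSE channel estimate from a pilot observation is shown below; this is the textbook form, and the letter's exact training model may differ:

```latex
% Linear MMSE estimate of a scalar channel h ~ CN(0, \sigma_h^2) from the
% pilot observation y_p = \sqrt{E_p} h + n, with noise n ~ CN(0, \sigma_n^2):
\hat{h}_{\mathrm{MMSE}}
  = \frac{\sqrt{E_p}\,\sigma_h^2}{E_p\,\sigma_h^2 + \sigma_n^2}\; y_p
```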

Relevance:

100.00%

Publisher:

Abstract:

Design embraces several disciplines dedicated to the production of artifacts and services. These disciplines are quite independent, and only recently has psychological interest focused on them. Nowadays, the psychological theories of design, also called the design cognition literature, describe the design process from the information-processing viewpoint. These models co-exist with normative standards of how designs should be crafted. In many places there are concrete discrepancies between the two, in a way that resembles the differences between actual and ideal decision-making. This study aimed to explore a possible difference related to problem decomposition. Decomposition is a standard component of human problem-solving models and is also included in the normative models of design. The idea of decomposition is to focus on a single aspect of the problem at a time. Despite its significance, the nature of decomposition in conceptual design is poorly understood and has only been preliminarily investigated. This study addressed the status of decomposition in the conceptual design of products using protocol analysis. Previous empirical investigations have argued that there are implicit and explicit decomposition, but have not provided a theoretical basis for the two. Therefore, the current research began by reviewing the problem-solving and design literature and then composing a cognitive model of the solution search of conceptual design. The result is a synthetic view which describes recognition and decomposition as the basic schemata for conceptual design. A psychological experiment was conducted to explore decomposition. In the test, sixteen (N=16) senior students of mechanical engineering created concepts for two alternative tasks. The concurrent think-aloud method and protocol analysis were used to study decomposition. The results showed that despite the emphasis on decomposition in formal education, only a few designers (N=3) used decomposition explicitly and spontaneously in the presented tasks, although the designers in general applied a top-down control strategy. Instead, inferring from their use of structured strategies, the designers always relied on implicit decomposition. These results confirm the initial observations found in the literature, but they also suggest that decomposition should be investigated further. In the future, the benefits and possibilities of explicit decomposition should be considered along with the cognitive mechanisms behind decomposition. After that, the current results could be reinterpreted.

Relevance:

100.00%

Publisher:

Abstract:

In this paper we give the performance of an MQAM OFDM based WLAN in the presence of single- and multiple-channel Zigbee interference. An analytical model for the symbol error rate (SER) of the MQAM OFDM system in the presence of single- and multiple-channel Zigbee interference, in AWGN and Rayleigh fading channels, is given. Simulation results are compared with the analytical symbol error rate (SER) of the MQAM-OFDM system. For the analysis, we have modeled the Zigbee interference using the power spectral density (PSD) of OQPSK modulation, finding the average interference power for each sub-carrier of the OFDM system, and then averaged the SER over all WLAN sub-carriers. Simulations closely match the analytical models. It is seen from the simulation and analytical results that the performance of the WLAN is severely affected by Zigbee interference. The symbol error rate (SER) for the 16QAM and 64QAM OFDM systems is of the order of 10⁻² for SIR (signal-to-interference ratio) values of 20 dB and 30 dB, respectively, in the presence of a single Zigbee interferer inside the WLAN frequency band for a Rayleigh fading channel. For SIR values above 30 dB and 40 dB, the SER approaches the interference-free SER for the 16QAM and 64QAM OFDM systems, respectively.
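For reference, the standard approximate SER of square M-QAM in AWGN, with interference treated as additional Gaussian noise, is sketched below; this is a simplifying assumption, whereas the paper's analytical model works per sub-carrier from the OQPSK PSD:

```python
import math

def q_function(x):
    # Gaussian Q-function via the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mqam_ser(sinr_linear, M):
    """Approximate SER of square M-QAM over an AWGN-like channel.

    Standard expression P_s ~ 4(1 - 1/sqrt(M)) Q(sqrt(3*SINR/(M-1))),
    where treating Zigbee interference as extra Gaussian noise lets an
    effective SINR replace the SNR (a simplifying assumption).
    """
    a = 4.0 * (1.0 - 1.0 / math.sqrt(M))
    return a * q_function(math.sqrt(3.0 * sinr_linear / (M - 1)))

for sinr_db, M in ((15, 16), (20, 16), (25, 64)):
    sinr = 10 ** (sinr_db / 10)
    print(f"{M}-QAM at {sinr_db} dB: SER ~ {mqam_ser(sinr, M):.2e}")
```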

Relevance:

100.00%

Publisher:

Abstract:

We present robust joint nonlinear transceiver designs for multiuser multiple-input multiple-output (MIMO) downlink in the presence of imperfections in the channel state information at the transmitter (CSIT). The base station (BS) is equipped with multiple transmit antennas, and each user terminal is equipped with one or more receive antennas. The BS employs Tomlinson-Harashima precoding (THP) for interuser interference precancellation at the transmitter. We consider robust transceiver designs that jointly optimize the transmit THP filters and receive filter for two models of CSIT errors. The first model is a stochastic error (SE) model, where the CSIT error is Gaussian-distributed. This model is applicable when the CSIT error is dominated by channel estimation error. In this case, the proposed robust transceiver design seeks to minimize a stochastic function of the sum mean square error (SMSE) under a constraint on the total BS transmit power. We propose an iterative algorithm to solve this problem. The other model we consider is a norm-bounded error (NBE) model, where the CSIT error can be specified by an uncertainty set. This model is applicable when the CSIT error is dominated by quantization errors. In this case, we consider a worst-case design. For this model, we consider robust (i) minimum SMSE, (ii) MSE-constrained, and (iii) MSE-balancing transceiver designs. We propose iterative algorithms to solve these problems, wherein each iteration involves a pair of semidefinite programs (SDPs). Further, we consider an extension of the proposed algorithm to the case with per-antenna power constraints. We evaluate the robustness of the proposed algorithms to imperfections in CSIT through simulation, and show that the proposed robust designs outperform nonrobust designs as well as robust linear transceiver designs reported in the recent literature.
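The two CSIT error models can be summarized in standard form as follows (consistent with, but not copied from, the paper):

```latex
% Stochastic error (SE) model: Gaussian CSIT error, appropriate when
% channel estimation error dominates.
\mathbf{H} = \widehat{\mathbf{H}} + \mathbf{E},
\qquad
\operatorname{vec}(\mathbf{E}) \sim \mathcal{CN}\bigl(\mathbf{0},\, \sigma_E^2 \mathbf{I}\bigr)

% Norm-bounded error (NBE) model: uncertainty set for worst-case design,
% appropriate when quantization error dominates.
\mathbf{H} = \widehat{\mathbf{H}} + \mathbf{E},
\qquad
\lVert \mathbf{E} \rVert_F \le \delta
```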

Relevance:

100.00%

Publisher:

Abstract:

We deal with a single conservation law with discontinuous convex-concave type fluxes which arise while considering sign-changing flux coefficients. The main difficulty is that a weak solution may not exist, as the Rankine-Hugoniot condition at the interface may not be satisfied for certain choices of the initial data. We develop the concept of generalized entropy solutions for such equations by replacing the Rankine-Hugoniot condition with a generalized Rankine-Hugoniot condition. The uniqueness of solutions is shown by proving that the generalized entropy solutions form a contractive semi-group in L¹. Existence follows by showing that a Godunov-type finite difference scheme converges to the generalized entropy solution. The scheme is based on solutions of the associated Riemann problem and is neither consistent nor conservative. The analysis developed here enables a complete treatment of fluxes having at most one extremum in the domain of definition. Numerical results reporting the performance of the scheme are presented.
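In outline, the setting is a scalar conservation law whose flux changes across an interface, written generically below; the generalized Rankine-Hugoniot condition of the paper relaxes the classical interface coupling shown in the second line:

```latex
% Scalar conservation law with a flux discontinuity at the interface x = 0
u_t + F(x,u)_x = 0,
\qquad
F(x,u) = \begin{cases} f(u), & x < 0, \\ g(u), & x > 0, \end{cases}

% Classical Rankine-Hugoniot coupling at the interface (flux continuity),
% which may fail for certain initial data; the paper's generalized
% condition replaces this equality.
f\bigl(u(0^-,t)\bigr) = g\bigl(u(0^+,t)\bigr)
```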