358 results for KAM curve


Relevance: 10.00%

Abstract:

This article addresses the need for an implementation mechanism for the protection of refugees’ rights. It is contended that the principle of non-refoulement forms part of customary international law and is therefore binding on all states, irrespective of whether or not they are parties to the Convention Relating to the Status of Refugees 1951 or its Protocol 1967. Over the last decade, the U.S. and its allies have been fighting to curb terrorism, which has raised many issues such as human rights violations, deportation, expulsion, extradition and rendition. Pakistan has played a very critical role in the War against Terrorism, particularly in reference to the war in Afghanistan. A particular concern of this article is the violation of refugees’ rights in Pakistan in 2008 and 2010. The article highlights the legislation regarding the non-expulsion of Afghan refugees from Pakistan to a territory where they have a well-founded fear of persecution. The article is divided into three parts: the first deals with the “Principle of Non-Refoulement”, the second with “exceptions to the principle”, and the last discusses violations of that principle in Pakistan with reference to Afghan refugees.

Relevance: 10.00%

Abstract:

This study examines and quantifies the effect of adding polyelectrolytes to cellulose nanofibre suspensions on their gel point, the lowest solids concentration at which the suspension forms a continuous network. The lower the gel point, the faster the drainage time to produce a sheet and the higher the porosity of the final sheet formed. Two new techniques were designed to measure the dynamic compressibility and the drainability of nanocellulose–polyelectrolyte suspensions. We developed a master curve showing that the independent variable controlling the behaviour of nanocellulose suspensions and their composites is the structure of the flocculated suspension, which is best quantified as the gel point. This was independent of the type of polyelectrolyte used. At an addition level of 2 mg/g of nanofibre, a reduction in gel point of over 50 % was achieved using either a high molecular weight (13 MDa) linear cationic polyacrylamide (CPAM, 40 % charge), a high molecular weight (750,000 Da) dendrimer polyethylenimine (HPEI) or even a low molecular weight (2000 Da) polyethylenimine (LPEI). There was no significant difference in the minimum gel point achieved, despite the differences in polyelectrolyte morphology and molecular weight. In this paper, we show that the gel point controls the flow through the fibre suspension, even when comparing fibre suspensions with solids content above the gel point. A lower gel point makes it easier for water to drain through the fibre network, reducing the pressure required to achieve a given dewatering rate and reducing the filtering time required to form a wet-laid sheet. We further show that the lower gel point partially controls the structure of the wet-laid sheet after it is dried. Halving the gel point increased the air permeability of the dry sheet by 37, 46 and 25 %, when using CPAM, HPEI and LPEI, respectively. The resistance to liquid flow was reduced by 74 and 90 % when using CPAM and LPEI. Analysis of the paper formed shows that the sheet-forming process and final sheet properties can be engineered and controlled by adding polyelectrolytes to the nanofibre suspension.

Relevance: 10.00%

Abstract:

In a medical negligence context, and under the causation provisions enacted pursuant to Civil Liability Legislation in most Australian jurisdictions, the normative concept of “scope of liability” requires a consideration of whether or not, and why, a medical practitioner should be responsible for a patient’s harm. As such, it places a limit on the extent to which practitioners are deemed liable for a breach of the duty of care owed by them, in circumstances where a factual connection between that breach and the causation of a patient’s harm has already been shown. It has been said that a determination of causation requires ‘the identification and articulation of an evaluative judgement by reference to “the purposes and policy of the relevant part of the law”’: Wallace v Kam (2013) 297 ALR 383, 388. Accordingly, one of the normative factors falling within scope of liability is an examination of the content and purpose of the rule or duty of care violated – that is, its underlying policy and whether this supports the attribution of legal responsibility to a practitioner. In this context, and with reference to recent jurisprudence, this paper considers: the policy relevant to a practitioner’s duty of care in each of the areas of diagnosis, treatment and advice; how this has been used to determine an appropriate scope of liability for the purpose of the causation inquiry in medical negligence claims; and whether such an approach is problematic for medical standards or decision-making.

Relevance: 10.00%

Abstract:

Purpose: The post-illumination pupil response (PIPR) has been quantified using four metrics, but the spectral sensitivity of only one is known; here we determine the other three. To optimize the human PIPR measurement, we determine the protocol producing the largest PIPR, the duration of the PIPR, and the metric(s) with the lowest coefficient of variation. Methods: The consensual pupil light reflex (PLR) was measured with a Maxwellian view pupillometer. - Experiment 1: The spectral sensitivity of four PIPR metrics [plateau, 6 s, area under curve (AUC) early and late recovery] was determined from a criterion PIPR to a 1 s pulse and fitted with the Vitamin A1 nomogram (λmax = 482 nm). - Experiment 2: The PLR was measured as a function of three stimulus durations (1 s, 10 s, 30 s), five irradiances spanning low to high melanopsin excitation levels (retinal irradiance: 9.8 to 14.8 log quanta.cm-2.s-1), and two wavelengths, one with high (465 nm) and one with low (637 nm) melanopsin excitation. Intra- and inter-individual coefficients of variation (CV) were calculated. Results: The melanopsin (opn4) photopigment nomogram adequately describes the spectral sensitivity of all four PIPR metrics. The PIPR amplitude was largest with 1 s short-wavelength pulses (≥ 12.8 log quanta.cm-2.s-1). The plateau and 6 s PIPR showed the least intra- and inter-individual CV (≤ 0.2). The maximum duration of the sustained PIPR was 83.0 ± 48.0 s (mean ± SD) for 1 s pulses and 180.1 ± 106.2 s for 30 s pulses (465 nm; 14.8 log quanta.cm-2.s-1). Conclusions: All current PIPR metrics provide a direct measure of the intrinsic melanopsin photoresponse. To measure progressive changes in melanopsin function in disease, we recommend that the PIPR be measured using short-duration pulses (e.g., ≤ 1 s) with high melanopsin excitation and analyzed with the plateau and/or 6 s metrics. Our PIPR duration data provide a baseline for the selection of inter-stimulus intervals between consecutive pupil testing sequences.

Relevance: 10.00%

Abstract:

Purpose: The post-illumination pupil response (PIPR) has been quantified in the literature by four metrics. The spectral sensitivity of only one metric is known, and this study quantifies the other three. To optimize the measurement of the PIPR in humans, we also determine the stimulus protocol producing the largest PIPR, the duration of the PIPR, and the metric(s) with the lowest coefficient of variation. Methods: The consensual pupil light reflex (PLR) was measured with a Maxwellian view pupillometer (35.6° diameter stimulus). - Experiment 1: The spectral sensitivity of four PIPR metrics [plateau, 6 s, area under curve (AUC) early and late recovery] was determined from a criterion PIPR (n = 2 participants) to a 1 s pulse at five wavelengths (409-592 nm) and fitted with the Vitamin A nomogram (λmax = 482 nm). - Experiment 2: The PLR was measured in five healthy participants [29 to 42 years (mean = 32.6 years)] as a function of three stimulus durations (1 s, 10 s, 30 s), five irradiances spanning low to high melanopsin excitation levels (retinal irradiance: 9.8 to 14.8 log quanta.cm-2.s-1), and two wavelengths, one with high (465 nm) and one with low (637 nm) melanopsin excitation. Intra- and inter-individual coefficients of variation (CV) were calculated. Results: The melanopsin (opn4) photopigment nomogram adequately described the spectral sensitivity derived from all four PIPR metrics. The largest PIPR amplitude was observed with 1 s short-wavelength pulses (retinal irradiance ≥ 12.8 log quanta.cm-2.s-1). Of the four PIPR metrics, the plateau and 6 s PIPR showed the least intra- and inter-individual CV (≤ 0.2). The maximum duration of the sustained PIPR was 83.4 ± 48.0 s (mean ± SD) for 1 s pulses and 180.1 ± 106.2 s for 30 s pulses (465 nm; 14.8 log quanta.cm-2.s-1). Conclusions: All current PIPR metrics provide a direct measure of intrinsic melanopsin retinal ganglion cell function. To measure progressive changes in melanopsin function in disease, we recommend that the intrinsic melanopsin response be measured using a 1 s pulse with high melanopsin excitation and that the PIPR be analyzed with the plateau and/or 6 s metrics. Given that the PIPR can remain sustained for as long as 3 minutes, our PIPR duration data provide a baseline for the selection of inter-stimulus intervals between consecutive pupil testing sequences.
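
The sketch below illustrates how the four PIPR metrics named in this abstract (plateau, 6 s, early- and late-recovery AUC) could be computed from a baseline-normalised pupil trace. The analysis windows and function name are assumptions for illustration only, not the authors' exact definitions.

```python
import numpy as np

def pipr_metrics(t, pupil, stim_offset, baseline):
    """Illustrative PIPR metrics from a pupil-diameter trace.

    t           -- sample times in seconds
    pupil       -- pupil diameter at each sample time
    stim_offset -- time (s) at which the light stimulus ends
    baseline    -- pre-stimulus pupil diameter
    The analysis windows below are assumptions, not the authors' definitions.
    """
    t, pupil = np.asarray(t, float), np.asarray(pupil, float)
    rel = pupil / baseline          # baseline-normalised diameter
    post = t - stim_offset          # time since stimulus offset

    def auc(lo, hi):
        # Trapezoidal area between the trace and baseline over [lo, hi] s.
        m = (post >= lo) & (post <= hi)
        v = 1.0 - rel[m]
        return float(np.sum(0.5 * (v[:-1] + v[1:]) * np.diff(post[m])))

    return {
        # Plateau: mean sustained constriction over an assumed 10-30 s window.
        "plateau": float(1.0 - rel[(post >= 10) & (post <= 30)].mean()),
        # 6 s PIPR: constriction measured 6 s after stimulus offset.
        "6s": float(1.0 - np.interp(6.0, post, rel)),
        # Early and late recovery AUC (assumed 0-10 s and 10-30 s windows).
        "auc_early": auc(0, 10),
        "auc_late": auc(10, 30),
    }
```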

Relevance: 10.00%

Abstract:

Travel speed is one of the most critical parameters for road safety; the evidence suggests that increased vehicle speed is associated with higher crash risk and injury severity. Both naturalistic and simulator studies have reported that drivers distracted by a mobile phone select a lower driving speed. Speed decrements have been argued to be a risk-compensatory behaviour of distracted drivers. Nonetheless, the extent and circumstances of the speed change among distracted drivers remain poorly understood. As such, the primary objective of this study was to investigate patterns of speed variation in relation to contextual factors and distraction. Using the CARRS-Q high-fidelity Advanced Driving Simulator, the speed selection behaviour of 32 drivers aged 18-26 years was examined in two phone conditions: baseline (no phone conversation) and handheld phone operation. The simulator driving route contained five different types of road traffic complexity: one road section with a horizontal S curve, one horizontal S curve with adjacent traffic, one straight segment of suburban road without traffic, one straight segment of suburban road with traffic interactions, and one road segment in a city environment. Speed deviations from the posted speed limit were analysed using Ward’s hierarchical clustering method to identify the effects of road traffic environment and cognitive distraction. The speed deviations along curved road sections formed two different clusters for the two phone conditions, implying that distracted drivers adopt a different strategy for selecting driving speed in a complex driving situation. In particular, distracted drivers selected a lower speed while driving along a horizontal curve. The speed deviations along the city road segment and other straight road segments grouped into a different cluster, and the deviations were not significantly different across phone conditions, suggesting a negligible effect of distraction on speed selection along these road sections. Future research should focus on developing a risk compensation model to explain the relationship between road traffic complexity and distraction.
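
As a minimal sketch of the analysis approach named above, Ward's hierarchical clustering can be applied to speed-deviation profiles with SciPy. The synthetic data shape and the two-cluster cut are assumptions for illustration; they are not the study's data or settings.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Illustrative data: one row per driver-by-phone-condition observation,
# columns are mean speed deviations (km/h) from the posted limit on each
# of the five road segments (synthetic values, not the study's data).
rng = np.random.default_rng(0)
speed_dev = rng.normal(loc=-2.0, scale=3.0, size=(64, 5))

# Ward's method merges the pair of clusters that minimises the increase
# in total within-cluster variance at each step.
Z = linkage(speed_dev, method="ward")

# Cut the dendrogram into two clusters, e.g. to compare baseline vs
# handheld speed-selection profiles.
labels = fcluster(Z, t=2, criterion="maxclust")
print(np.bincount(labels)[1:])   # cluster sizes
```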

Relevance: 10.00%

Abstract:

The mechanical properties of arterial walls have long been recognized to play an essential role in the development and progression of cardiovascular disease (CVD). Early detection of variations in the elastic modulus of arteries would help in monitoring patients at high cardiovascular risk and in stratifying them according to risk. An in vivo, non-invasive, high-resolution MR phase-contrast based method for the estimation of the time-dependent elastic modulus of healthy arteries was developed, validated in vitro by means of a thin-walled silicone rubber tube integrated into an existing MR-compatible flow simulator, and used on healthy volunteers. A comparison of the elastic modulus of the silicone tube measured with the MRI-based technique against direct measurements confirmed the method’s capability, and the repeatability of the method was assessed. Viscoelastic and inertial effects characterizing the dynamic response of arteries in vivo emerged from the comparison of the pressure waveform and the area variation curve over a cycle. For all the volunteers who took part in the study, the elastic modulus was found to be in the range 50-250 kPa, to increase during the rising part of the cycle, and to decrease with decreasing pressure during the downstroke of systole and subsequent diastole.
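
The abstract compares pressure and cross-sectional-area waveforms; the following is a minimal sketch using the standard Peterson pressure-strain modulus, Ep = A_dia * (P_sys - P_dia) / (A_sys - A_dia), as a simple lumped stand-in for the time-resolved modulus described above. It is not the authors' exact formulation, and the example numbers are illustrative.

```python
import numpy as np

def pressure_strain_modulus(pressure_pa, area_m2):
    """Peterson-type pressure-strain modulus over one cardiac cycle.

    A lumped stand-in for the time-dependent modulus in the abstract,
    not the authors' exact method.
    """
    p_dia, p_sys = pressure_pa.min(), pressure_pa.max()
    a_dia, a_sys = area_m2.min(), area_m2.max()
    return a_dia * (p_sys - p_dia) / (a_sys - a_dia)

# Example: 80/120 mmHg pressures and a ~5 % cross-sectional area change.
mmhg = 133.322
p = np.array([80, 95, 120, 100, 85]) * mmhg          # Pa
a = np.array([0.500, 0.505, 0.525, 0.510, 0.500]) * 1e-4   # m^2
print(f"Ep ~ {pressure_strain_modulus(p, a) / 1e3:.0f} kPa")   # ~107 kPa
```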

Relevance: 10.00%

Abstract:

Developing accurate and reliable crop detection algorithms is an important step towards harvesting automation in horticulture. This paper presents a novel approach to the visual detection of highly occluded fruits. We use a conditional random field (CRF) on multi-spectral image data (colour and Near-Infrared Reflectance, NIR) to model two classes: crop and background. To describe these two classes, we explore a range of visual-texture features, including local binary patterns, histograms of oriented gradients, and learned auto-encoder features. The proposed methods are evaluated using hand-labelled images from a dataset captured on a commercial capsicum farm. Experimental results are presented, and performance is evaluated in terms of the Area Under the Curve (AUC) of the precision-recall curves. Our current results achieve a maximum performance of 0.81 AUC when combining all of the texture features in conjunction with colour information.
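
The evaluation metric used above (area under the precision-recall curve) can be computed with scikit-learn as sketched below. The labels and scores here are random placeholders standing in for per-pixel crop/background predictions; they are not the paper's CRF outputs.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, auc

# Placeholder ground truth (1 = crop, 0 = background) and classifier
# scores, standing in for per-pixel predictions from a crop detector.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=5000)
scores = np.clip(y_true * 0.3 + rng.normal(0.4, 0.25, size=5000), 0, 1)

# Precision-recall curve and the area under it (PR-AUC).
precision, recall, _ = precision_recall_curve(y_true, scores)
pr_auc = auc(recall, precision)
print(f"PR-AUC = {pr_auc:.2f}")
```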

Relevance: 10.00%

Abstract:

We derive a new method for determining size-transition matrices (STMs) that eliminates probabilities of negative growth and accounts for individual variability. STMs are an important part of size-structured models, which are used in the stock assessment of aquatic species. The elements of an STM represent the probability of growth from one size class to another over a given time step. The growth increment over this time step can be modelled with a variety of methods, but when a population construct is assumed for the underlying growth model, the resulting STM may contain entries that predict negative growth. To solve this problem, we use a maximum likelihood method that incorporates individual variability in the asymptotic length, relative age at tagging, and measurement error to obtain von Bertalanffy growth model parameter estimates. The statistical moments of the future length, given an individual's previous length measurement and time at liberty, are then derived. We moment-match the true conditional distributions with skew-normal distributions and use these to accurately estimate the elements of the STMs. The method is investigated with simulated tag-recapture data and with tag-recapture data gathered from the Australian eastern king prawn (Melicertus plebejus).
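
A minimal sketch of how an STM row can be filled from a skew-normal approximation to the conditional future-length distribution is given below. The size classes, von Bertalanffy parameters, skew-normal moments, and the crude folding of any "negative growth" mass into the starting class are all illustrative assumptions, not the fitted values or the exact construction from the paper.

```python
import numpy as np
from scipy.stats import skewnorm

# Illustrative size-class boundaries (mm): classes [10,15), [15,20), ...
edges = np.arange(10, 61, 5)
mids = 0.5 * (edges[:-1] + edges[1:])
n = len(mids)

def conditional_dist(l_start, dt=1.0, linf=55.0, k=0.8, sd=2.0, shape=3.0):
    """Skew-normal approximation to length after time dt, given start length.

    The mean follows a von Bertalanffy increment; sd and shape are
    placeholder moment-matched values, not estimates from the paper.
    """
    mean = l_start + (linf - l_start) * (1.0 - np.exp(-k * dt))
    delta = shape / np.sqrt(1 + shape**2)
    scale = sd / np.sqrt(1 - 2 * delta**2 / np.pi)
    loc = mean - scale * delta * np.sqrt(2 / np.pi)
    return skewnorm(shape, loc=loc, scale=scale)

stm = np.zeros((n, n))
for i, l0 in enumerate(mids):
    dist = conditional_dist(l0)
    probs = np.diff(dist.cdf(edges))    # probability of landing in each class
    probs[-1] += dist.sf(edges[-1])     # mass above the largest class
    probs[i] += dist.cdf(edges[i])      # fold any 'negative growth' mass into
    probs[:i] = 0.0                     # the start class (crude illustrative fix)
    stm[i] = probs / probs.sum()        # each row sums to one

print(np.round(stm[0], 3))
```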

Relevance: 10.00%

Abstract:

We consider the development of statistical models for predicting the constituent concentrations of riverine pollutants, which is a key step in load estimation from frequent flow-rate data and less frequently collected concentration data. We consider how to capture the impact of past flow patterns via the average discounted flow (ADF), which discounts past flux based on the time elapsed, so that more recent fluxes are given more weight. However, the effectiveness of the ADF depends critically on the choice of the discount factor, which reflects the unknown environmental accumulation process of the constituent compounds. We propose to choose the discount factor by maximizing the adjusted R² value or the Nash-Sutcliffe model efficiency coefficient; the R² values are adjusted to take account of the number of parameters in the model fit. The resulting optimal discount factor can be interpreted as a measure of the constituent exhaustion rate during flood events. To evaluate the performance of the proposed regression estimators, we examine two different sampling scenarios by resampling fortnightly and opportunistically from two real daily datasets, which come from two United States Geological Survey (USGS) gaging stations located in the Des Plaines River and Illinois River basins. The generalized rating-curve approach produces biased estimates of the total sediment loads, by -30% to 83%, whereas the new approaches produce much lower biases, ranging from -24% to 35%. This substantial improvement in the estimates of the total load is due to the fact that the predictability of concentration is greatly improved by the additional predictors.
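
The sketch below shows one plausible exponentially discounted flow and a grid search for the discount factor that maximizes adjusted R² of a simple concentration regression. The recursive ADF form, the synthetic data, and the model specification are assumptions for illustration; the paper's exact definitions may differ.

```python
import numpy as np

def discounted_flow(flow, factor):
    """One plausible recursive average-discounted-flow form (illustrative):
        ADF[t] = flow[t] + factor * ADF[t-1],  0 < factor < 1
    so more recent flows receive more weight."""
    adf = np.empty_like(flow, dtype=float)
    adf[0] = flow[0]
    for t in range(1, len(flow)):
        adf[t] = flow[t] + factor * adf[t - 1]
    return adf

def adjusted_r2(y, X):
    """Adjusted R^2 of an OLS fit of y on X (intercept added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    r2 = 1 - resid.var() / y.var()
    n, p = X1.shape
    return 1 - (1 - r2) * (n - 1) / (n - p)

# Synthetic daily flow and concentration, purely for illustration.
rng = np.random.default_rng(2)
flow = np.exp(rng.normal(3, 1, size=365))
conc = 5 + 0.02 * flow - 0.004 * discounted_flow(flow, 0.8) + rng.normal(0, 1, 365)

# Choose the discount factor that maximises adjusted R^2.
factors = np.linspace(0.05, 0.95, 19)
scores = [adjusted_r2(conc, np.column_stack([flow, discounted_flow(flow, f)]))
          for f in factors]
print(f"best discount factor ~ {factors[int(np.argmax(scores))]:.2f}")
```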

Relevance: 10.00%

Abstract:

The contemporary methodology for growth models of organisms is based on continuous trajectories, which hinders the modelling of stepwise growth in crustacean populations. Growth models for fish are normally assumed to follow a continuous function, but a different type of model is needed for crustacean growth. Crustaceans must moult in order to grow, so their growth is a discontinuous process due to the periodic shedding of the exoskeleton during moulting. This stepwise growth through the moulting process makes growth estimation more complex. Stochastic approaches can be used to model discontinuous growth, or what are commonly known as "jumps" (Figure 1). However, in a stochastic growth model we need to ensure that only positive jumps occur. In view of this, we introduce a subordinator, a special case of a Lévy process: a non-decreasing Lévy process that will assist in modelling crustacean growth and give a better understanding of individual variability and stochasticity in moulting periods and increments. We develop methods for parameter estimation and illustrate them with a dataset from laboratory experiments. The motivating dataset is from the ornate rock lobster, Panulirus ornatus, which is found between Australia and Papua New Guinea. Owing to the presence of sex effects on growth (Munday et al., 2004), we estimate the growth parameters separately for each sex. Since all hard parts are shed repeatedly, exact age determination of a lobster can be challenging. However, the growth parameters for the moulting process can be estimated from tank data through: (i) inter-moult periods, and (ii) moult increments. We derive a joint density made up of two functions, one for moult increments and the other for the time intervals between moults, which we claim are conditionally independent given pre-moult length and the inter-moult period. Because of this conditional independence (via the Markov property), the parameters in each function can be estimated separately. Subsequently, we integrate both functions through a Monte Carlo method and thereby obtain a population mean for crustacean growth (e.g. the red curve in Figure 1).
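
A minimal simulation of stepwise growth as a compound-Poisson subordinator (exponential inter-moult times, gamma-distributed positive increments) is sketched below, together with a Monte Carlo mean growth curve. The distributions and parameter values are illustrative assumptions; the paper's subordinator model and estimation procedure are more involved.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_moult_growth(l0=30.0, horizon=365.0,
                          mean_intermoult=40.0, incr_shape=4.0, incr_scale=1.5):
    """Simulate one animal's stepwise growth as a compound-Poisson subordinator.

    Inter-moult periods are exponential and moult increments are gamma
    distributed (hence strictly positive jumps). Parameter values are
    illustrative, not estimates from the tank data.
    """
    t, length = 0.0, l0
    times, lengths = [0.0], [l0]
    while True:
        t += rng.exponential(mean_intermoult)        # waiting time to next moult
        if t > horizon:
            break
        length += rng.gamma(incr_shape, incr_scale)  # positive moult increment
        times.append(t)
        lengths.append(length)
    return np.array(times), np.array(lengths)

# Monte Carlo mean growth curve across many simulated individuals.
grid = np.linspace(0, 365, 100)
paths = []
for _ in range(500):
    times, lengths = simulate_moult_growth()
    idx = np.searchsorted(times, grid, side="right") - 1   # step-function value
    paths.append(lengths[idx])
mean_curve = np.mean(paths, axis=0)
print(mean_curve[[0, 49, 99]])
```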

Relevance: 10.00%

Abstract:

We consider estimating the total load from frequent flow data and less frequent concentration data. There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and makes the estimation of trends or the determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates that minimize the biases and make use of informative predictive variables. The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized rating-curve approach with additional predictors that capture unique features in the flow data, such as the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach for two rivers delivering to the Great Barrier Reef, Queensland, Australia. One dataset is from the Burdekin River and consists of total suspended sediment (TSS), nitrogen oxide (NOx) and gauged flow for 1997. The other dataset is from the Tully River, for the period July 2000 to June 2008. For NOx in the Burdekin, the new estimates are very similar to the ratio estimates even when there is no relationship between concentration and flow. However, for the Tully dataset, by incorporating the additional predictive variables, namely the discounted flow and flow phases (rising or recessing), we substantially improved the model fit, and thus the certainty with which the load is estimated.
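
A minimal sketch of a generalized rating-curve model of the kind described above is given below: log-concentration regressed on log-flow plus two extra predictors, a discounted-flow term and a rising/falling hydrograph indicator. The data are synthetic and the predictor construction (recursive discounting, gradient-based phase flag) is an illustrative assumption, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic hourly flow (m^3/s), purely for illustration.
flow = np.exp(1.5 + 0.8 * np.sin(np.linspace(0, 12 * np.pi, 2000))
              + rng.normal(0, 0.3, 2000))

def discounted_flow(q, factor=0.95):
    out = np.empty_like(q)
    out[0] = q[0]
    for t in range(1, len(q)):
        out[t] = q[t] + factor * out[t - 1]
    return out

rising = (np.gradient(flow) > 0).astype(float)   # hydrograph phase indicator
dflow = discounted_flow(flow)

# Suppose concentrations were sampled at only a subset of time steps.
idx = rng.choice(len(flow), size=200, replace=False)
log_conc = (0.5 + 0.6 * np.log(flow[idx]) - 0.2 * np.log(dflow[idx])
            + 0.3 * rising[idx] + rng.normal(0, 0.2, 200))

# Generalised rating curve: log C ~ log Q + log(discounted Q) + phase.
X = np.column_stack([np.ones(len(idx)), np.log(flow[idx]),
                     np.log(dflow[idx]), rising[idx]])
beta, *_ = np.linalg.lstsq(X, log_conc, rcond=None)

# Predict concentration at every time step for a later load calculation.
X_all = np.column_stack([np.ones(len(flow)), np.log(flow), np.log(dflow), rising])
conc_hat = np.exp(X_all @ beta)
print(beta.round(2))
```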

Relevance: 10.00%

Abstract:

There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and makes the estimation of trends or the determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimizing the biases and making use of possible predictive variables. The load estimation procedure can be summarized by the following four steps: - (i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows; - (ii) output the predicted flow rates as in (i) at the concentration sampling times, if the corresponding flow rates are not collected; - (iii) establish a predictive model for the concentration data, which incorporates all possible predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and - (iv) obtain the sum of all the products of the predicted flow and the predicted concentration over the regular time intervals as an estimate of the load. The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized regression (rating-curve) approach with additional predictors that capture unique features in the flow data, namely the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the cumulative discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. The model also has the capacity to accommodate autocorrelation in the model errors, which results from intensive sampling during floods. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach using concentrations of total suspended sediment (TSS) and nitrogen oxide (NOx) and gauged flow data from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling biases for NOx concentrations range from a factor of 2 to 10, indicating severe bias. As expected, the traditional average and extrapolation methods produce much higher estimates than those obtained when the sampling bias is taken into account.
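
Step (iv) of the procedure above, summing predicted flow times predicted concentration over regular intervals, amounts to the simple calculation sketched below. The unit choices and the synthetic series are illustrative; in practice the inputs would come from the time-series flow model (steps i-ii) and the rating-curve concentration model (step iii).

```python
import numpy as np

def total_load(flow_m3s, conc_mgL, dt_seconds):
    """Step (iv): total load as the sum of flow x concentration over regular steps.

    flow_m3s   -- predicted flow at regular intervals (m^3/s)
    conc_mgL   -- predicted concentration at the same intervals (mg/L = g/m^3)
    dt_seconds -- length of each interval in seconds (e.g. 600 for 10 minutes)
    Returns the load in tonnes.
    """
    grams = np.sum(conc_mgL * flow_m3s * dt_seconds)   # g/s * s summed over steps
    return grams / 1e6                                 # g -> tonnes

# Illustrative 10-minute series over one day (synthetic values).
rng = np.random.default_rng(5)
flow = np.exp(2 + rng.normal(0, 0.2, 144))       # m^3/s
conc = 50 + 10 * np.log(flow)                    # mg/L
print(f"daily load ~ {total_load(flow, conc, 600):.1f} t")
```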

Relevance: 10.00%

Abstract:

The Fabens method is commonly used to estimate the growth parameters k and L∞ of the von Bertalanffy model from tag-recapture data. However, the Fabens method has an inherent bias when individual growth is variable. This paper presents an asymptotically unbiased method using a maximum likelihood approach that takes account of individual variability in both maximum length and age-at-tagging. It is assumed that each individual's growth follows a von Bertalanffy curve with its own maximum length and age-at-tagging. The parameter k is assumed to be constant to ensure that the mean growth follows a von Bertalanffy curve and to avoid overparameterization. Our method also makes more efficient use of the measurements at tagging and recapture and includes diagnostic techniques for checking distributional assumptions. The method is reasonably robust and performs better than the Fabens method when individual growth differs from the von Bertalanffy relationship. When measurement error is negligible, the estimation involves maximizing the profile likelihood of one parameter only. The method is applied to tag-recapture data for the grooved tiger prawn (Penaeus semisulcatus) from the Gulf of Carpentaria, Australia.
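
For context, the classical Fabens estimator fits the growth-increment form ΔL = (L∞ - L_tag)(1 - e^(-kΔt)) to tag-recapture data by least squares. The sketch below implements that baseline estimator (the one the abstract notes is biased under individual variability), not the paper's maximum likelihood correction; the simulated data and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(6)

# Simulated tag-recapture data with individual variability in L_infinity.
n = 300
linf_i = rng.normal(38.0, 3.0, n)                 # individual asymptotic lengths
k_true = 0.9
l_tag = rng.uniform(15, 35, n)                    # length at tagging
dt = rng.uniform(0.2, 1.5, n)                     # years at liberty
dl = (linf_i - l_tag) * (1 - np.exp(-k_true * dt)) + rng.normal(0, 0.5, n)

def residuals(params):
    # Fabens increment model with a single (population) L_infinity and k.
    linf, k = params
    return dl - (linf - l_tag) * (1 - np.exp(-k * dt))

fit = least_squares(residuals, x0=[40.0, 0.5])
linf_hat, k_hat = fit.x
print(f"Fabens estimates: L_inf = {linf_hat:.1f}, k = {k_hat:.2f}")
```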

Relevance: 10.00%

Abstract:

The bentiromide test was evaluated using plasma p-aminobenzoic acid as an indirect test of pancreatic insufficiency in young children between 2 months and 4 years of age. To determine the optimal test method, the following were examined: (a) the best dose of bentiromide (15 mg/kg or 30 mg/kg); (b) the optimal sampling time for plasma p-aminobenzoic acid; and (c) the effect of coadministration of a liquid meal. Sixty-nine children (1.6 ± 1.0 years) were studied, including 34 controls with normal fat absorption and 35 patients (34 with cystic fibrosis) with fat maldigestion due to pancreatic insufficiency. Control and pancreatic-insufficient subjects were studied in three age-matched groups: (a) low-dose bentiromide (15 mg/kg) with clear fluids; (b) high-dose bentiromide (30 mg/kg) with clear fluids; and (c) high-dose bentiromide with a liquid meal. Plasma p-aminobenzoic acid was determined at 0, 30, 60, and 90 minutes, then hourly for 6 hours. The dose effect of bentiromide with clear liquids was evaluated. High-dose bentiromide best discriminated control from pancreatic-insufficient subjects, owing to a higher peak plasma p-aminobenzoic acid level in controls, but sensitivity and specificity remained poor. High-dose bentiromide with a liquid meal produced a delayed increase in plasma p-aminobenzoic acid in the control subjects, probably caused by retarded gastric emptying. However, in the pancreatic-insufficient subjects, use of a liquid meal resulted in significantly lower plasma p-aminobenzoic acid levels at all time points; plasma p-aminobenzoic acid at 2 and 3 hours completely discriminated between control and pancreatic-insufficient patients. Evaluation of the data by area under the time-concentration curve failed to improve the test results. In conclusion, the bentiromide test is a simple, clinically useful means of detecting pancreatic insufficiency in young children, but the higher dose administered with a liquid meal is recommended.