Abstract:
As low carbon technologies become more pervasive, distribution network operators are looking to support the expected changes in the demands on the low voltage networks through the smarter control of storage devices. Accurate forecasts of demand at the single-household level, or of small aggregations of households, can improve the peak demand reduction brought about through such devices by helping to plan the appropriate charging and discharging cycles. However, before such methods can be developed, validation measures are required which can assess the accuracy and usefulness of forecasts of volatile and noisy household-level demand. In this paper we introduce a new forecast verification error measure that reduces the so-called “double penalty” effect, incurred by forecasts whose features are displaced in space or time, compared with traditional point-wise metrics, such as Mean Absolute Error and p-norms in general. The measure that we propose is based on finding a restricted permutation of the original forecast that minimises the point-wise error, according to a given metric. We illustrate the advantages of our error measure using half-hourly domestic household electrical energy usage data recorded by smart meters and discuss the effect of the permutation restriction.
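To make the idea concrete, here is a minimal sketch of such a measure: the forecast is matched to the observations by an optimal assignment in which each value may be displaced by at most w time steps, and the point-wise error of the matched pairs is averaged. The function name, the ±w band restriction, and the use of a large cost to forbid distant matches are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal sketch of the "restricted permutation" error: find the matching
# of forecast values to observations, with each value allowed to move at
# most `w` time steps, that minimises the point-wise absolute error.
import numpy as np
from scipy.optimize import linear_sum_assignment

def adjusted_mae(forecast, actual, w=2):
    n = len(forecast)
    cost = np.abs(forecast[None, :] - actual[:, None])  # cost[i, j] = |f_j - a_i|
    idx = np.arange(n)
    band = np.abs(idx[None, :] - idx[:, None]) > w      # True outside the +/-w window
    cost[band] = 1e12                                    # forbid moves beyond w steps
    rows, cols = linear_sum_assignment(cost)             # optimal restricted matching
    return cost[rows, cols].mean()

# Example: a demand peak forecast one step early is penalised once, not twice.
actual   = np.array([0., 0., 5., 0., 0.])
forecast = np.array([0., 5., 0., 0., 0.])
print(adjusted_mae(forecast, actual, w=1))  # 0.0: the displaced peak is matched
print(np.abs(forecast - actual).mean())     # 2.0: plain MAE double-penalises it
```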
Abstract:
Prism is a modular classification rule generation method based on the ‘separate and conquer’ approach, an alternative to the rule induction approach using decision trees, also known as ‘divide and conquer’. Prism often achieves a similar level of classification accuracy to decision trees, but tends to produce a more compact, noise-tolerant set of classification rules. As with other classification rule generation methods, a principal problem arising with Prism is that of overfitting due to over-specialised rules. In addition, over-specialised rules increase the associated computational complexity. These problems can be addressed by pruning methods. For the Prism method, two pruning algorithms have recently been introduced for reducing overfitting of classification rules: J-pruning and Jmax-pruning. Both algorithms are based on the J-measure, an information-theoretic means of quantifying the theoretical information content of a rule. Jmax-pruning attempts to exploit the J-measure to its full potential, because J-pruning does not actually achieve this and may even lead to underfitting. A series of experiments has shown that Jmax-pruning may outperform J-pruning in reducing overfitting. However, Jmax-pruning is computationally relatively expensive and may also lead to underfitting. This paper reviews the Prism method and the two existing pruning algorithms above. It also proposes a novel pruning algorithm called Jmid-pruning. The latter is based on the J-measure; it reduces overfitting to a similar level as the other two algorithms, but is better at avoiding underfitting and unnecessary computational effort. The authors conduct an experimental study of the performance of the Jmid-pruning algorithm in terms of classification accuracy and computational efficiency. The algorithm is also evaluated comparatively with the J-pruning and Jmax-pruning algorithms.
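For reference, all three pruning methods score rules with the J-measure of Smyth and Goodman. A minimal sketch, with illustrative variable names, of a rule "IF y THEN x" scored from its antecedent coverage p(y), the class prior p(x), and the rule accuracy p(x|y):

```python
# The J-measure: average information content of the rule "IF y THEN x".
from math import log2

def j_measure(p_y: float, p_x: float, p_x_given_y: float) -> float:
    def term(p, q):  # p * log2(p/q), with the 0*log(0) = 0 convention
        return 0.0 if p == 0 else p * log2(p / q)
    # Cross-entropy part: how much the rule shifts belief about the class.
    j_inner = term(p_x_given_y, p_x) + term(1 - p_x_given_y, 1 - p_x)
    return p_y * j_inner

# A rule covering 20% of the data and predicting its class with 90%
# accuracy, against a 50% class prior:
print(j_measure(0.2, 0.5, 0.9))  # ~0.106 bits
```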
Abstract:
A model of the dynamics and thermodynamics of a plume of meltwater at the base of an ice shelf is presented. Such ice shelf water plumes may become supercooled and deposit marine ice if they rise (because the in situ freezing temperature rises as pressure falls), so the model incorporates both melting and freezing at the ice shelf base and a multiple-size-class model of frazil ice dynamics and deposition. The plume is considered in two horizontal dimensions, so the influence of Coriolis forces is incorporated for the first time. Rotation is found to be extremely influential, with simulated plumes flowing in near-geostrophic balance because of the low friction at a smooth ice shelf base. As a result, an ice shelf water plume will only rise and become supercooled (and thus deposit marine ice) if it is constrained to flow upslope by topography. This result agrees with the observed distribution of marine ice under the Filchner–Ronne Ice Shelf, Antarctica. In addition, it is found that the model only produces reasonable marine ice formation rates when an accurate ice shelf draft is used, implying that the characteristics of real ice shelf water plumes can only be captured using models with both rotation and a realistic topography.
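The supercooling mechanism can be illustrated with the linearised liquidus relation used in ice shelf water plume models, in which the freezing temperature depends on salinity and depth. The coefficients below are approximate values of the kind commonly used in such models, not necessarily the paper's:

```python
# Supercooling sketch: the in situ freezing temperature of seawater rises
# as a plume ascends (pressure drops), so meltwater at the freezing point
# at depth becomes supercooled higher up. Coefficients are approximate.
def freezing_temperature(salinity_psu: float, depth_m: float) -> float:
    """In situ freezing temperature (deg C); depth is positive downward."""
    a, b, c = -0.0573, 0.0832, -7.61e-4
    return a * salinity_psu + b + c * depth_m

# Water at the freezing point at 1000 m depth, lifted to 400 m:
t_deep = freezing_temperature(34.5, 1000.0)       # ~ -2.65 deg C
t_freeze_shallow = freezing_temperature(34.5, 400.0)
print(t_freeze_shallow - t_deep)                  # ~0.46 deg C of supercooling
```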
Abstract:
Purpose – The development of marketing strategies optimally adjusted to export markets has been a vitally important topic for both managers and academics for some five decades. However, there is no agreement in the literature about which elements constitute marketing strategy or which components of domestic strategies should be adapted to export markets. The purpose of this paper is to develop a new scale – STRATADAPT. Design/methodology/approach – Results from a sample of small and medium-sized industrial exporting firms support a four-dimensional scale – product, promotion, price, and distribution strategies – of 30 items. The scale presents evidence of composite reliability as well as discriminant and nomological validity. Findings – Findings reveal that all four dimensions of marketing strategy adaptation are positively associated with the amount of the firm's financial resources allocated to export activity. Practical implications – The STRATADAPT scale may assist managers in developing better international marketing strategies as well as in planning more accurate and efficient marketing programs across markets. Originality/value – This study develops a new scale, the STRATADAPT scale, which is a broad measure of export marketing strategy adaptation.
Abstract:
An alternative procedure to that of Lo is proposed for assessing whether there is significant evidence of persistence in time series. The technique estimates the Hurst exponent itself, and significance testing is based on an application of bootstrapping using surrogate data. The method is applied to a set of ten daily pound exchange rates. A general lack of long-term memory is found to characterize all the series tested, consistent with the findings of a number of other recent papers that have used Lo's techniques.
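A minimal sketch of this kind of test, assuming a rescaled-range (R/S) estimator of the Hurst exponent and random-shuffle surrogates (which destroy temporal dependence while preserving the marginal distribution); the specific estimator and surrogate scheme used in the paper may differ:

```python
# Estimate H by R/S analysis, then judge significance against surrogates.
import numpy as np

def rs_hurst(x: np.ndarray) -> float:
    n = len(x)
    sizes = np.unique(np.logspace(np.log10(10), np.log10(n // 2), 10).astype(int))
    rs = []
    for s in sizes:
        chunks = x[: (n // s) * s].reshape(-1, s)
        dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
        r = dev.max(axis=1) - dev.min(axis=1)          # range of cumulative deviations
        sd = chunks.std(axis=1)
        rs.append((r[sd > 0] / sd[sd > 0]).mean())     # rescaled range at scale s
    # Slope of log(R/S) against log(scale) estimates H.
    return np.polyfit(np.log(sizes), np.log(rs), 1)[0]

rng = np.random.default_rng(0)
returns = rng.standard_normal(2000)                    # stand-in for daily FX returns
h_obs = rs_hurst(returns)
h_surr = np.array([rs_hurst(rng.permutation(returns)) for _ in range(200)])
p_value = (h_surr >= h_obs).mean()                     # one-sided test for persistence
print(f"H = {h_obs:.3f}, p = {p_value:.3f}")
```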
Abstract:
Coronal mass ejections (CMEs) can be continuously tracked through a large portion of the inner heliosphere by direct imaging in visible and radio wavebands. White light (WL) signatures of solar wind transients, such as CMEs, result from Thomson scattering of sunlight by free electrons and therefore depend on both viewing geometry and electron density. The Faraday rotation (FR) of radio waves from pulsars and extragalactic quasars, which arises due to the presence of such solar wind features, depends on the line-of-sight magnetic field component B_∥ and the electron density. To understand coordinated WL and FR observations of CMEs, we perform forward magnetohydrodynamic modeling of an Earth-directed shock and synthesize the signatures that would be remotely sensed at a number of widely distributed vantage points in the inner heliosphere. Removal of the background solar wind contribution reveals the shock-associated enhancements in WL and FR. While the efficiency of Thomson scattering depends on scattering angle, WL radiance I decreases with heliocentric distance r roughly as I ∝ r⁻³. The sheath region downstream of the Earth-directed shock is well viewed from the L4 and L5 Lagrangian points, demonstrating the benefits of these points in terms of space weather forecasting. The spatial position of the main scattering site, r_sheath, and the mass of plasma at that position, M_sheath, can be inferred from the polarization of the shock-associated enhancement in WL radiance. From the FR measurements, the local B_∥ at r_sheath can then be estimated. Simultaneous observations in polarized WL and FR can be used not only to detect CMEs, but also to diagnose their plasma and magnetic field properties.
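The FR signal referred to here is the rotation measure, RM = (e³/(8π²ε₀m_e²c³)) ∫ n_e B_∥ ds, so a density and field enhancement in the CME sheath produces an RM excess along lines of sight crossing it. A minimal sketch with a toy sheath profile (an illustration, not the paper's MHD model):

```python
# Rotation measure along a line of sight through a denser CME sheath.
import numpy as np

K = 2.63e-13  # rad m^-2 per (m^-3 * T * m); e^3 / (8 pi^2 eps0 me^2 c^3)

def rotation_measure(n_e, b_parallel, ds):
    """RM in rad m^-2; n_e in m^-3, b_parallel in tesla, ds in metres."""
    return K * np.sum(n_e * b_parallel * ds)

# Line of sight through 0.2 au of ambient wind with a compressed sheath inside.
au = 1.496e11
s = np.linspace(0, 0.2 * au, 5000)
ds = np.diff(s, prepend=0.0)
n_e = np.full_like(s, 5e6)            # ~5 cm^-3 ambient electron density
b_par = np.full_like(s, 5e-9)         # ~5 nT ambient field along the LOS
sheath = (s > 0.08 * au) & (s < 0.12 * au)
n_e[sheath] *= 4                      # compressed sheath plasma
b_par[sheath] *= 3
print(rotation_measure(n_e, b_par, ds))  # RM enhanced relative to background
```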
Abstract:
Aim: To develop a brief, parent-completed instrument (‘ERIC’) for the detection of cognitive delay in 10- to 24-month-olds born preterm, or with low birth weight, or with perinatal complications, and to establish its diagnostic properties. Method: Scores were collected from parents of 317 children meeting ≥1 inclusion criterion (birth weight <1500 g; gestational age <34 completed weeks; 5-minute Apgar <7; presence of hypoxic-ischemic encephalopathy) and meeting no exclusion criteria. Children were assessed for cognitive delay using a criterion score (<80) on the Cognitive Scale of the Bayley Scales of Infant and Toddler Development, Third Edition. Items were retained according to their individual associations with delay. Sensitivity, specificity, and positive and negative predictive values were estimated, and a truncated ERIC was developed for use below 14 months. Results: ERIC detected 17 out of 18 delayed children in the sample, with 94.4% sensitivity (95% confidence interval [CI] 83.9-100%), 76.9% specificity (72.1-81.7%), 19.8% positive predictive value (11.4-28.2%), and 99.6% negative predictive value (98.7-100%); the positive likelihood ratio was 4.09 and the negative likelihood ratio 0.07; the associated area under the curve was 0.909 (0.829-0.960). Interpretation: ERIC has potential value as a quickly administered diagnostic instrument for ruling out early cognitive delay in infants born preterm aged 10 to 24 months, and as a screen for cognitive delay. Further research may be needed before ERIC can be recommended for wide-scale use.
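The quoted statistics can be back-calculated from a confusion matrix consistent with the reported figures (17 of 18 delayed children detected, 317 children in total, 19.8% PPV); the sketch below does this as an illustration, not as the study's raw data:

```python
# Back-calculated confusion matrix reproducing the reported diagnostics.
tp, fn = 17, 1          # delayed children: detected / missed
fp, tn = 69, 230        # non-delayed children: flagged / correctly cleared

sensitivity = tp / (tp + fn)              # 0.944
specificity = tn / (tn + fp)              # 0.769
ppv = tp / (tp + fp)                      # 0.198
npv = tn / (tn + fn)                      # 0.996
lr_pos = sensitivity / (1 - specificity)  # ~4.09
lr_neg = (1 - sensitivity) / specificity  # ~0.07
print(sensitivity, specificity, ppv, npv, lr_pos, lr_neg)
```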
Abstract:
Recent gravity missions have produced a dramatic improvement in our ability to measure the ocean’s mean dynamic topography (MDT) from space. To fully exploit this oceanic observation, however, we must quantify its error. To establish a baseline, we first assess the error budget for an MDT calculated using a third-generation GOCE geoid and the CLS01 mean sea surface (MSS). With these products, we can resolve MDT spatial scales down to 250 km with an accuracy of 1.7 cm, with the MSS and geoid making similar contributions to the total error. For spatial scales within the range 133–250 km the error is 3.0 cm, with the geoid making the greater contribution. For the smallest resolvable spatial scales (80–133 km) the total error is 16.4 cm, with geoid error accounting for almost all of this. Relative to this baseline, the most recent versions of the geoid and MSS fields reduce the long- and short-wavelength errors by 0.9 cm and 3.2 cm, respectively, but they have little impact in the medium-wavelength band. The newer MSS is responsible for most of the long-wavelength improvement, while for the short-wavelength component it is the geoid. We find that while the formal geoid errors have reasonable global mean values, they fail to capture the regional variations in error magnitude, which depend on the steepness of the sea floor topography.
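If the geoid and MSS contributions are assumed independent, they combine in quadrature; the per-band split below is an illustrative reconstruction consistent with the qualitative statements above (similar contributions at long wavelengths, geoid-dominated in the middle band, geoid accounting for almost all at short wavelengths), not figures from the paper:

```python
# Error budget sketch: independent error sources add in quadrature.
import math

bands = {  # wavelength band: (geoid error, MSS error) in cm, illustrative
    ">250 km":    (1.2, 1.2),   # similar contributions -> ~1.7 cm total
    "133-250 km": (2.8, 1.1),   # geoid-dominated       -> ~3.0 cm total
    "80-133 km":  (16.3, 1.8),  # almost all geoid      -> ~16.4 cm total
}
for band, (geoid, mss) in bands.items():
    total = math.hypot(geoid, mss)  # sqrt(geoid^2 + mss^2)
    print(f"{band:<11} total {total:4.1f} cm (geoid {geoid}, MSS {mss})")
```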
Abstract:
Purpose – This paper aims to address gaps in the assessment of service recovery strategy. An effective service recovery strategy that prevents customer defection after a service failure is a powerful managerial instrument. The literature to date does not present a comprehensive assessment of service recovery strategy; it also lacks a clear picture of the service recovery actions at managers’ disposal in the event of a failure, and of the effectiveness of individual strategies on customer outcomes. Design/methodology/approach – Based on service recovery theory, this paper proposes a formative index of service recovery strategy and empirically validates this measure using partial least-squares path modelling with survey data from 437 complainants in the telecommunications industry in Egypt. Findings – The CURE scale (CUstomer REcovery scale) presents evidence of reliability as well as convergent, discriminant and nomological validity. Findings also reveal that problem-solving, speed of response, effort, facilitation and apology are the actions that have an impact on customer satisfaction with service recovery. Practical implications – This new formative index is of potential value in investigating links between strategy and customer evaluations of service, by helping managers identify which actions contribute most to changes in the overall service recovery strategy as well as to satisfaction with service recovery. Ultimately, the CURE scale facilitates the long-term planning of effective complaint management. Originality/value – This is the first study in the service marketing literature to propose a comprehensive assessment of service recovery strategy and to clearly identify the service recovery actions that contribute most to changes in the overall service recovery strategy.
Abstract:
In this article, the authors develop a new measurement scale (the RELQUAL scale) to assess the degree of relationship quality between the exporting firm and its importer. Relationship quality is presented as a higher-order concept. Findings reveal that better relationship quality is associated with higher levels of (1) information sharing, (2) communication quality, (3) long-term orientation, and (4) satisfaction with the relationship. The four multi-item scales show strong evidence of reliability as well as convergent, discriminant and nomological validity in a sample of British exporters. Findings also reveal that relationship quality is positively and significantly associated with export performance. Suggestions for applying the measure in future research are presented.
Abstract:
This article is a direct response to a recent observation in the literature that managers appear to be short-term oriented in their assessment of the performance of an export venture (Madsen 1998). On the basis of a cross-national survey of exporting firms, the authors present a three-dimensional scale for assessing managerial judgment of short-term export performance (i.e., the STEP scale). The three dimensions are (1) satisfaction with short-term performance improvement, (2) short-term exporting intensity improvement, and (3) expected short-term performance improvement. The scale presents evidence of reliability as well as convergent, discriminant, and nomological validity, and it reveals factorial similarity and factorial equivalence across both samples. The authors outline managerial and public policy implications that stem from the scale and identify avenues for further export marketing research.
Abstract:
This paper describes the methodology used to compile a corpus called MorphoQuantics, which contains a comprehensive set of 17,943 complex word types extracted from the spoken component of the British National Corpus (BNC). The categorisation of these complex words was derived primarily from the classification of Prefixes, Suffixes and Combining Forms proposed by Stein (2007). The MorphoQuantics corpus has been made available on a website of the same name; it lists 554 word-initial and 281 word-final morphemes in English, together with their etymology and meaning, and records the type and token frequencies of all the associated complex words containing these morphemes in the spoken element of the BNC, together with their part of speech. The results show that, although the number of word-initial affixes is nearly double that of word-final affixes, the relative numbers of complex word types observed with each in the BNC are very similar; word-final affixes, however, are more productive in that, on average, the frequency with which they attach to different bases is three times that of word-initial affixes. Finally, this paper considers how linguists, psycholinguists and psychologists may use MorphoQuantics to support their empirical work in first and second language acquisition, and in clinical and educational research.
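A minimal sketch of how type and token frequencies of this kind can be counted for a set of affixes; the affix lists and the naive string matching are illustrative stand-ins for Stein's (2007) classification and proper morphological analysis:

```python
# Count, per affix, the distinct complex word types and total tokens.
from collections import Counter, defaultdict

prefixes = ["un", "re", "dis"]          # word-initial morphemes (sample)
suffixes = ["ness", "ise", "ment"]      # word-final morphemes (sample)

def affix_stats(tokens):
    tokens = [t.lower() for t in tokens]
    token_counts = Counter(tokens)
    stats = defaultdict(lambda: {"types": set(), "tokens": 0})
    for word, freq in token_counts.items():
        for p in prefixes:
            if word.startswith(p) and len(word) > len(p) + 2:
                stats[p + "-"]["types"].add(word)
                stats[p + "-"]["tokens"] += freq
        for s in suffixes:
            if word.endswith(s) and len(word) > len(s) + 2:
                stats["-" + s]["types"].add(word)
                stats["-" + s]["tokens"] += freq
    return {a: (len(v["types"]), v["tokens"]) for a, v in stats.items()}

text = "unhappiness and unrest disturb the payment they dispute".split()
print(affix_stats(text))  # affix -> (type count, token count)
```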
Abstract:
There is an ongoing debate on the environmental effects of genetically modified (GM) crops, to which this paper aims to contribute. First, data on the environmental impacts of GM and conventional crops are collected from peer-reviewed journals; second, an analysis is conducted to examine which crop type is less harmful to the environment. Published data on environmental impacts are measured using an array of indicators, so their analysis requires normalisation and aggregation. Drawing on the composite indicators literature, this paper builds composite indicators to measure the impact of GM and conventional crops along three dimensions: (1) non-target key species richness, (2) pesticide use, and (3) aggregated environmental impact. The comparison of the three composite indicators across the two crop types allows us to establish not only a ranking of which crop type is preferable for the environment, but also the probability that one crop type outperforms the other from an environmental perspective. Results show that GM crops tend to cause lower environmental impacts than conventional crops for the indicators analysed.
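A minimal sketch of the comparison pipeline described above, assuming min-max normalisation, equal-weight aggregation, and a bootstrap estimate of the probability that one crop type outperforms the other; the weighting scheme and the data here are illustrative, not the paper's:

```python
# Composite-indicator comparison of two crop types with a bootstrap test.
import numpy as np

rng = np.random.default_rng(1)
# Rows = published observations, columns = indicators (higher = worse impact).
gm   = rng.normal(0.4, 0.15, size=(25, 3))
conv = rng.normal(0.6, 0.15, size=(25, 3))

both = np.vstack([gm, conv])
lo, hi = both.min(axis=0), both.max(axis=0)
norm = lambda x: (x - lo) / (hi - lo)          # min-max normalisation per indicator

def composite(x):
    return norm(x).mean(axis=1)                # equal-weight aggregation

def prob_gm_better(n_boot=5000):
    c_gm, c_conv = composite(gm), composite(conv)
    wins = 0
    for _ in range(n_boot):
        m_gm = rng.choice(c_gm, len(c_gm)).mean()      # resample with replacement
        m_conv = rng.choice(c_conv, len(c_conv)).mean()
        wins += m_gm < m_conv                  # lower composite = lower impact
    return wins / n_boot

print(prob_gm_better())  # bootstrap P(GM composite impact < conventional)
```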