846 results for Interest points


Relevance: 20.00%

Abstract:

In this review piece, we survey the literature on the cost of equity capital implications of corporate disclosure and conservative accounting policy choices, with the principal objective of providing insights into the design and methodological issues that underlie the empirical investigations. We begin with a review of the analytical studies most typically cited in the empirical research as providing a theoretical foundation. We then turn to the literature that offers insights into the selection of proxies for each of our points of interest: cost of equity capital, disclosure quality and accounting conservatism. As a final step, we review selected empirical studies to illustrate the relevant evidence found within the literature. Based on our review, we interpret the literature as providing the researcher with only limited direct guidance on the appropriate choice of measure for each of the constructs of interest. Further, we view the literature as raising questions about both the interpretation of empirical findings in the face of measurement concerns and the suitability of certain theoretical arguments to the research setting. Overall, perhaps the clearest message is that one of the most controversial and fundamental issues underlying the literature is whether information effects are diversifiable or non-diversifiable.

Relevance: 20.00%

Abstract:

The calibration process in micro-simulation is extremely complicated. The difficulties are more prevalent if the process encompasses fitting aggregate and disaggregate parameters, e.g. travel time and headway. Current calibration practice operates mostly at the aggregate level, for example travel time comparison, and such practices are popular for assessing network performance. Though these applications are significant, there is another stream of micro-simulated calibration at the disaggregate level. This study focuses on such a micro-calibration exercise, which is key to better comprehending motorway traffic risk levels and to managing variable speed limit (VSL) and ramp metering (RM) techniques. A selected section of the Pacific Motorway in Brisbane is used as a case study. The discussion primarily covers the critical issues encountered during the parameter adjustment exercise (e.g. vehicle and driving behaviour) with reference to key traffic performance indicators, such as speed, lane distribution and headway, at specific motorway points. The endeavour is to highlight the utility and implications of such disaggregate-level simulation for improved traffic prediction studies. Calibrating for points, as opposed to the whole network, is also briefly addressed, to examine critical issues such as the suitability of local calibration at a global scale. The paper will be of interest to transport professionals in Australia/New Zealand, where micro-simulation, particularly at point level, remains comparatively unexplored territory in motorway management.
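Although the paper's own procedure is not reproduced here, point-level checks of this kind are commonly scored with the GEH statistic, which compares simulated and observed flows detector by detector. A minimal sketch follows; the detector names, counts and the common "GEH < 5 at 85% of detectors" rule of thumb are illustrative assumptions, not values from this study.

```python
import math

def geh(simulated: float, observed: float) -> float:
    """GEH statistic for comparing simulated vs. observed hourly flows."""
    return math.sqrt(2 * (simulated - observed) ** 2 / (simulated + observed))

# Illustrative detector counts (vehicles/hour), not data from the study.
detectors = {"det_A": (1480, 1520), "det_B": (950, 1010), "det_C": (2100, 1890)}

scores = {d: geh(sim, obs) for d, (sim, obs) in detectors.items()}
share_ok = sum(g < 5.0 for g in scores.values()) / len(scores)
print(scores, f"{share_ok:.0%} of detectors under GEH 5")
```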

Relevance: 20.00%

Abstract:

Caveats as protection for unregistered interests: lapsing and non-lapsing caveats; the caveator; use only in appropriate circumstances.

Relevance: 20.00%

Abstract:

We read the excellent review of telemonitoring in chronic heart failure (CHF) [1] with interest and commend the authors on the proposed classification of telemedical remote management systems according to the type of data transfer, decision ability and level of integration. However, several points require clarification in relation to our Cochrane review of telemonitoring and structured telephone support [2]. We included a study by Kielblock [3]. We corresponded directly with this study team specifically to find out whether or not this was a randomised study and were informed that it was a randomised trial, albeit by date of birth. We note in our review [2] that this randomisation method carries a high risk of bias. Post-hoc meta-analyses without these data demonstrate no substantial change to the effect estimates for all-cause mortality (original risk ratio (RR) 0·66 [95% CI 0·54, 0·81], p<0·0001; revised RR 0·72 [95% CI 0·57, 0·92], p=0·008), all-cause hospitalisation (original RR 0·91 [95% CI 0·84, 0·99], p=0·02; revised RR 0·92 [95% CI 0·84, 1·02], p=0·10) or CHF-related hospitalisation (original RR 0·79 [95% CI 0·67, 0·94], p=0·008; revised RR 0·75 [95% CI 0·60, 0·94], p=0·01). Secondly, we would classify the Tele-HF study [4, 5] as structured telephone support, rather than telemonitoring. Again, inclusion of these data alters the point estimate but not the overall result of the meta-analyses [4]. Finally, our review [2] does not include invasive telemonitoring, as the search strategy was not designed to capture these studies. Therefore, direct comparison of our review findings with recent studies of these interventions is not recommended.
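As background for readers checking figures of this kind, a risk ratio and its 95% confidence interval can be recomputed from event counts with the standard log-transform approximation. The sketch below uses invented counts purely to illustrate the arithmetic; it does not reproduce data from any of the cited trials.

```python
import math

def risk_ratio_ci(a, n1, c, n2, z=1.96):
    """Risk ratio and 95% CI from event counts a/n1 vs. c/n2."""
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)  # SE of log(RR)
    lo, hi = (math.exp(math.log(rr) + s * z * se) for s in (-1, 1))
    return rr, lo, hi

# Illustrative counts only: 60/400 events vs. 90/400 events.
print("RR %.2f [95%% CI %.2f, %.2f]" % risk_ratio_ci(60, 400, 90, 400))
```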

Relevance: 20.00%

Abstract:

The research objectives of this thesis were to contribute to Bayesian statistical methodology, in particular to risk assessment methodology and to spatial and spatio-temporal methodology, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas, and use these applications as a springboard for developing new statistical methods as well as undertaking analyses which might give answers to particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems: risk assessment analyses for wastewater, and the assessment of differences between cropping systems in a four-dimensional dataset, over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and again to use Bayesian hierarchical models to explore the necessarily complex modelling of four-dimensional agricultural data. The specific objectives of the research were to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure to incorporate all the experimental uncertainty associated with various constants, thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day’s data from the agricultural dataset in a way which satisfactorily captured the complexities of the data; to build a model for several days’ data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations. This work forms five papers, two of which have been published, with two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as needing to be modelled as an ‘errors-in-variables’ problem [Fuller, 1987]. This illustrated a simple method for the incorporation of experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models were used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and non-structured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so would allow little insight into how the treatment effects vary with depth.
Hence, a number of essentially non-parametric approaches were taken to see the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model; the applied contribution was the analysis of moisture over depth and estimation of the contrast of interest together with its credible intervals. These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that with large datasets, the use of WinBUGS becomes more problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days' data, and we show that moisture in the soil for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with increasing variances with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility. However, it does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage being analysed as a set of time series. We see this spatio-temporal interaction model as being a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
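As a rough illustration of the CAR layered idea (neighbourhoods restricted to cells within the same depth layer, with layer-specific precisions), the sketch below assembles a block-diagonal CAR precision matrix in numpy. The grid size, number of layers, rho and tau values are invented for illustration; this is not the thesis code, which used WinBUGS and pyMCMC.

```python
import numpy as np

def car_layer_precision(nx, ny, tau, rho=0.95):
    """Proper CAR precision Q = tau * (D - rho*W) for one nx-by-ny layer."""
    n = nx * ny
    W = np.zeros((n, n))
    for i in range(nx):
        for j in range(ny):
            k = i * ny + j
            for di, dj in ((1, 0), (0, 1)):  # 4-neighbourhood, same layer only
                if i + di < nx and j + dj < ny:
                    m = (i + di) * ny + (j + dj)
                    W[k, m] = W[m, k] = 1.0
    D = np.diag(W.sum(axis=1))
    return tau * (D - rho * W)

# One precision block per depth layer; differing taus let the variance change with depth.
layers = [car_layer_precision(4, 4, tau) for tau in (2.0, 1.0, 0.5)]
Q = np.block([[layers[a] if a == b else np.zeros_like(layers[0])
               for b in range(3)] for a in range(3)])  # block-diagonal: no cross-layer edges
print(Q.shape)  # (48, 48)
```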

Relevance: 20.00%

Abstract:

The formation of hypertrophic scars is a frequent outcome of wound repair and often requires further therapy with treatments such as silicone gel sheets (SGS; Perkins et al., 1983). Although widely used, knowledge regarding SGS and their mechanism of action on hypertrophic scars is limited. Furthermore, SGS require consistent application for at least twelve hours a day for up to twelve consecutive months, beginning as soon as wound re-epithelialisation has occurred. Preliminary research at QUT has shown that some species of silicone present in SGS have the ability to permeate into collagen gel skin mimetics upon exposure. An analogue of these species, GP226, was found to decrease both collagen synthesis and the total amount of collagen present following exposure to cultures of cells derived from hypertrophic scars. This silicone of interest was a crude mixture of silicone species, which resolved into five fractions of different molecular weight. These five fractions were found to have differing effects on collagen synthesis and cell viability following exposure to fibroblasts derived from hypertrophic scars (HSF), keloid scars (KF) and normal skin (nHSF and nKF). The research performed herein further assesses the potential of GP226 and its fractions for scar remediation by determining in more detail their effects on HSF, KF, nHSF, nKF and human keratinocytes (HK) in terms of cell viability and proliferation at various time points. Through these studies it was revealed that Fraction IV was the most active fraction, as it induced a reduction in cell viability and proliferation most similar to that observed with GP226. Cells undergoing apoptosis were also detected in HSF cultures exposed to GP226 and Fraction IV using the TUNEL assay (Roche). These investigations were difficult to pursue further as the fractionation process used for GP226 was labour-intensive and time-inefficient. Therefore a number of silicones with similar structure to Fraction IV were synthesised and screened for their effect following application to HSF and nHSF. PDMS7-g-PEG7, a silicone-PEG copolymer of low molecular weight and low hydrophilic-lipophilic balance factor, was found to be the most effective at reducing cell proliferation and inducing apoptosis in cultures of HSF, nHSF and HK. Further studies investigated gene expression through microarray and superarray techniques and demonstrated that many genes are differentially expressed in HSF following treatment with GP226, Fraction IV and PDMS7-g-PEG7. In brief, it was demonstrated that genes for TGFβ1 and TNF are not differentially regulated, while genes for AIFM2, IL8, NSMAF, SMAD7, TRAF3 and IGF2R show increased expression (fold change > 1.8) following treatment with PDMS7-g-PEG7. In addition, genes for αSMA, TRAF2, COL1A1 and COL3A1 show decreased expression (fold change < −1.8) following treatment with GP226, Fraction IV and PDMS7-g-PEG7. The data obtained suggest that many different pathways related to apoptosis and collagen synthesis are affected in HSF following exposure to PDMS7-g-PEG7. The significance is that silicone-PEG copolymers, such as GP226, Fraction IV and PDMS7-g-PEG7, could potentially be a non-invasive substitute for apoptosis-inducing chemical agents that are currently used as scar treatments. It is anticipated that these findings will ultimately contribute to the development of a novel scar therapy with faster action and improved outcomes for patients suffering from hypertrophic scars.

Relevance: 20.00%

Abstract:

Innovation processes are rarely smooth, and disruptions often occur at transition points where one knowledge domain passes the technology on to another. At these transition points, communication is a key component in assisting the smooth handover of technologies. However, for smooth transitions to occur, we argue that appropriate structures have to be in place and boundary-spanning activities need to be facilitated. This paper presents three case studies of innovation processes, and the findings support the view that structures and boundary spanning are essential for smooth transitions. We have explained the need to pass primary responsibility between agents to successfully bring an innovation to market. We have also shown the need to combine knowledge through effective communication so that absorptive capacity is built into processes throughout the organisation rather than residing in one or two key individuals.

Relevance: 20.00%

Abstract:

As an international norm, the Responsibility to Protect (R2P) has gained substantial influence and institutional presence—and created no small controversy—in the ten years since its first conceptualisation. Conversely, the Protection of Civilians in Armed Conflict (PoC) has a longer pedigree and enjoys a less contested reputation. Yet UN Security Council action in Libya in 2011 has thrown into sharp relief the relationship between the two. UN Security Council Resolutions 1970 and 1973 follow exactly the process envisaged by R2P in response to imminent atrocity crimes, yet the operative paragraphs of the resolutions themselves invoke only PoC. This article argues that, while the agendas of PoC and R2P converge with respect to Security Council action in cases like Libya, outside this narrow context it is important to keep the two norms distinct. Peacekeepers, humanitarian actors, international lawyers, individual states and regional organisations are required to act differently with respect to the separate agendas and contexts covered by R2P and PoC. While overlap between the two does occur in highly visible cases like Libya, neither R2P nor PoC collapses normatively, institutionally or operationally into the other.

Relevance: 20.00%

Abstract:

Prevailing video adaptation solutions change the quality of the video uniformly throughout the whole frame in the bitrate adjustment process, while region-of-interest (ROI)-based solutions selectively retain quality in the areas of the frame to which viewers are more likely to pay attention. ROI-based coding can improve perceptual quality and viewer satisfaction while trading off some bandwidth. However, there has so far been no comprehensive study measuring the bitrate vs. perceptual quality trade-off. This paper proposes an ROI detection scheme for videos, characterized by low computational complexity and robustness, and measures the bitrate vs. quality trade-off for ROI-based encoding using a state-of-the-art H.264/AVC encoder to justify the viability of this type of encoding method. The results from the subjective quality test reveal that ROI-based encoding achieves a significant perceptual quality improvement over encoding with uniform quality at the cost of slightly more bits. Based on the bitrate measurements and subjective quality assessments, bitrate and perceptual quality estimation models for non-scalable ROI-based video coding (AVC) are developed, and these are found to be similar to the models for scalable video coding (SVC).
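The paper's own detection scheme is not reproduced here, but a minimal low-complexity ROI map based on block-wise frame differencing, of the general kind such schemes use, can be sketched as follows; the block size, threshold and QP values are illustrative assumptions.

```python
import numpy as np

def roi_map(prev_frame, frame, block=16, thresh=12.0):
    """Low-complexity motion ROI: mark blocks whose mean absolute
    frame difference exceeds a threshold (illustrative values)."""
    diff = np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32))
    h, w = diff.shape
    hb, wb = h // block, w // block
    blocks = diff[:hb * block, :wb * block].reshape(hb, block, wb, block)
    return blocks.mean(axis=(1, 3)) > thresh  # True = ROI block

# An encoder could then lower the quantization parameter (QP) inside the ROI:
prev = np.zeros((288, 352))            # CIF-sized toy frames
curr = 30 * np.ones((288, 352))
qp_map = np.where(roi_map(prev, curr), 24, 32)  # finer quantization in ROI blocks
```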

Relevance: 20.00%

Abstract:

Radial Hele-Shaw flows are treated analytically using conformal mapping techniques. The geometry of interest has a doubly-connected annular region of viscous fluid surrounding an inviscid bubble that is either expanding or contracting due to a pressure difference caused by injection or suction of the inviscid fluid. The zero-surface-tension problem is ill-posed for both bubble expansion and contraction, as both scenarios involve viscous fluid displacing inviscid fluid. Exact solutions are derived by tracking the location of singularities and critical points in the analytic continuation of the mapping function. We show that by treating the critical points, it is easy to observe finite-time blow-up, and the evolution equations may be written in exact form using complex residues. We present solutions that start with cusps on one interface and end with cusps on the other, as well as solutions that have the bubble contracting to a point. For the latter solutions, the bubble approaches an ellipse in shape at extinction.
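For orientation, in the simply connected version of this problem the conformal map z = f(ζ, t) from the unit disc to the fluid region satisfies the zero-surface-tension Polubarinova-Galin equation on |ζ| = 1, with Q the injection rate; the doubly connected annular case treated here generalises this structure.

```latex
% Polubarinova-Galin equation (simply connected, zero surface tension):
\operatorname{Re}\!\left[ \frac{\partial f}{\partial t}(\zeta,t)\,
    \overline{\zeta\,\frac{\partial f}{\partial \zeta}(\zeta,t)} \right]
  = \frac{Q}{2\pi}, \qquad |\zeta| = 1 .
```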

Relevance: 20.00%

Abstract:

Blogs and other online platforms for personal writing such as LiveJournal have been of interest to researchers across the social sciences and humanities for a decade now. Although growth in the uptake of blogging has stalled somewhat since the heyday of blogs in the early 2000s, blogging continues to be a major genre of Internet-based communication. Indeed, at the same time that mass participation has moved on to Facebook, Twitter, and other more recent communication phenomena, what has been left behind by the wave of mass adoption is a slightly smaller but all the more solidly established blogosphere of engaged and committed participants. Blogs are now an accepted part of institutional, group, and personal communications strategies (Bruns and Jacobs, 2006); in style and substance, they are situated between the more static information provided by conventional Websites and Webpages and the continuous newsfeeds provided through Facebook and Twitter updates. Blogs provide a vehicle for authors (and their commenters) to think through given topics in the space of a few hundred to a few thousand words – expanding, perhaps, on shorter tweets, and possibly leading to the publication of more fully formed texts elsewhere. Additionally, they are also a very flexible medium: they readily provide the functionality to include images, audio, video, and other additional materials – as well as the fundamental tool of blogging, the hyperlink itself. Indeed, the role of the link in blogs and blog posts should not be underestimated. Whatever the genre and topic that individual bloggers engage in, for the most part blogging is used to provide timely updates and commentary – and it is typical for such material to link both to relevant posts made by other bloggers, and to previous posts by the present author, both to background material which provides readers with further information about the blogger’s current topic, and to news stories and articles which the blogger found interesting or worthy of critique. Especially where bloggers are part of a larger community of authors sharing similar interests or views (and such communities are often indicated by the presence of yet another type of link – in blogrolls, often in a sidebar on the blog site, which list the blogger’s friends or favourites), then, the reciprocal writing and linking of posts often constitutes an asynchronous, distributed conversation that unfolds over the course of days, weeks, and months. Research into blogs is interesting for a variety of reasons, therefore. For one, a qualitative analysis of one or several blogs can reveal the cognitive and communicative processes through which individual bloggers define their online identity, position themselves in relation to fellow bloggers, frame particular themes, topics and stories, and engage with one another’s points of view. It may also shed light on how such processes may differ across different communities of interest, perhaps in correlation with the different societal framing and valorisation of specific areas of interest, with the socioeconomic backgrounds of individual bloggers, or with other external or internal factors. Such qualitative research now looks back on a decade-long history (for key collections, see Gurak, et al., 2004; Bruns and Jacobs, 2006; also see Walker Rettberg, 2008) and has recently shifted also to specifically investigate how blogging practices differ across different cultures (Russell and Echchaibi, 2009). 
Other studies have also investigated the practices and motivations of bloggers in specific countries from a sociological perspective, through large-scale surveys (e.g. Schmidt, 2009). Blogs have also been directly employed within both K-12 and higher education, across many disciplines, as tools for reflexive learning and discussion (Burgess, 2006).

Relevance: 20.00%

Abstract:

Volume measurements are useful in many branches of science and medicine. They are usually accomplished by acquiring a sequence of cross-sectional images through the object using an appropriate scanning modality, for example x-ray computed tomography (CT), magnetic resonance (MR) or ultrasound (US). In the cases of CT and MR, a dividing cubes algorithm can be used to describe the surface as a triangle mesh. However, such algorithms are not suitable for US data, especially when the image sequence is multiplanar (as it usually is). This problem may be overcome by manually tracing regions of interest (ROIs) on the registered multiplanar images and connecting the points into a triangular mesh. In this paper we describe and evaluate a new discrete form of Gauss’ theorem which enables the calculation of the volume enclosed by any surface described by a triangular mesh. The volume is calculated by summing, for each surface triangle, the scalar product of its centroid with its area-weighted normal. The algorithm was tested on computer-generated objects, US-scanned balloons, livers and kidneys and CT-scanned clay rocks. The results, expressed as the mean percentage difference ± one standard deviation, were 1.2 ± 2.3, 5.5 ± 4.7, 3.0 ± 3.2 and −1.2 ± 3.2% for balloons, livers, kidneys and rocks respectively. The results compare favourably with other volume estimation methods such as planimetry and tetrahedral decomposition.
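A minimal numpy sketch of this centroid-based volume rule, assuming a closed, consistently wound triangle mesh (array names are illustrative):

```python
import numpy as np

def mesh_volume(vertices: np.ndarray, faces: np.ndarray) -> float:
    """Volume via the divergence theorem: V = (1/3) * sum_i c_i . (A_i * n_i),
    where c_i is a triangle centroid and A_i * n_i its area-weighted normal."""
    v0, v1, v2 = (vertices[faces[:, k]] for k in range(3))
    centroids = (v0 + v1 + v2) / 3.0
    area_normals = 0.5 * np.cross(v1 - v0, v2 - v0)  # outward if consistently wound
    return abs(np.einsum("ij,ij->i", centroids, area_normals).sum()) / 3.0

# Quick check on a unit right tetrahedron (expected volume 1/6):
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
tris = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
print(mesh_volume(verts, tris))  # 0.1666...
```

The abs() makes the answer independent of whether the consistent winding happens to be inward or outward.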

Relevance: 20.00%

Abstract:

A new system is described for estimating volume from a series of multiplanar 2D ultrasound images. Ultrasound images are captured using a personal computer video digitizing card, and an electromagnetic localization system is used to record the pose of the ultrasound images. The accuracy of the system was assessed by scanning four groups of ten cadaveric kidneys on four different ultrasound machines. Scan image planes were oriented either radially, in parallel or slanted at 30° to the vertical. The cross-sectional images of the kidneys were traced using a mouse and the outline points transformed to 3D space using the Fastrak position and orientation data. Points on adjacent region-of-interest outlines were connected to form a triangle mesh and the volume of the kidneys estimated using the ellipsoid, planimetry, tetrahedral and ray tracing methods. There was little difference between the results for the different scan techniques or volume estimation algorithms, although, perhaps as expected, the ellipsoid results were the least precise. For radial scanning and ray tracing, the mean and standard deviation of the percentage errors for the four different machines were as follows: Hitachi EUB-240, −3.0 ± 2.7%; Tosbee RM3, −0.1 ± 2.3%; Hitachi EUB-415, 0.2 ± 2.3%; Acuson, 2.7 ± 2.3%.
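The central geometric step, mapping traced 2D outline points into 3D using the recorded pose, can be sketched as below; the pixel-to-mm scale factors, rotation convention and variable names are illustrative assumptions rather than details of the system described.

```python
import numpy as np

def outline_to_3d(pts_px, scale_mm, R, t):
    """Map 2D image points (pixels) to 3D world coordinates (mm).
    pts_px: (N, 2) traced outline; scale_mm: (sx, sy) mm/pixel;
    R: (3, 3) sensor rotation; t: (3,) sensor position."""
    pts_img = np.column_stack([pts_px * scale_mm,
                               np.zeros(len(pts_px))])  # image plane at z = 0
    return pts_img @ R.T + t

# Illustrative: identity orientation, probe 50 mm along the world z-axis.
pts = outline_to_3d(np.array([[10.0, 20.0]]), (0.3, 0.3),
                    np.eye(3), np.array([0.0, 0.0, 50.0]))
print(pts)  # [[ 3.  6. 50.]]
```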

Relevance: 20.00%

Abstract:

Ubiquitylation is a necessary step in the endocytosis and lysosomal trafficking of many plasma membrane proteins and can also influence protein trafficking in the biosynthetic pathway. Although a molecular understanding of ubiquitylation in these processes is beginning to emerge, very little is known about the role deubiquitylation may play. Fat Facets in mouse (FAM) is a substrate-specific deubiquitylating enzyme highly expressed in epithelia, where it interacts with its substrate, β-catenin. Here we show that, in the polarized intestinal epithelial cell line T84, FAM localized to multiple points of protein trafficking. FAM interacted with β-catenin and E-cadherin in T84 cells, but only in subconfluent cultures. FAM extensively colocalized with β-catenin in cytoplasmic puncta, but not at sites of cell-cell contact, and co-immunoprecipitated with β-catenin and E-cadherin from a higher-molecular-weight complex (~500 kDa). At confluence, FAM neither colocalized with, nor immunoprecipitated, β-catenin or E-cadherin, which were predominantly in a larger (~2 MDa) complex at the cell surface. Overexpression of FAM in MCF-7 epithelial cells resulted in increased β-catenin levels, which localized to the plasma membrane. Expression of E-cadherin in L-cell fibroblasts resulted in the relocalization of FAM from the Golgi to cytoplasmic puncta. These data strongly suggest that FAM associates with E-cadherin and β-catenin during trafficking to the plasma membrane.