Abstract:
Background Researching male sex work offers insight into the sexual lives of men and women while developing a more realistic appreciation for the changing issues associated with male sex work. This type of research is important not only because it reflects a growing and diversifying consumer demand for male sex work, but also because it enables the construction of knowledge that is up-to-date with changing ideas around sex and sexualities. Discussion This paper discusses a range of issues emerging in the male sex industry. Notably, globalisation and technology have contributed to the normalisation of male sex work and reshaped the landscape in which the male sex industry operates. As part of this discussion, we review STI and HIV rates among male sex workers at a global level, which are widely disparate and geographically contextual, with rates of HIV among male sex workers ranging from 0% in some areas to 50% in others. The Internet has reshaped the way that male sex workers and clients connect and has been identified as a useful space for safer sex messages and research that seeks out hidden or commonly excluded populations. Future directions We argue for a public health context that recognises the emerging and changing nature of male sex work, which means programs and policies that are appropriate for this population group. Online communities relating to male sex work are important avenues for safer sexual messages and unique opportunities to reach often excluded sub-populations of both clients and male sex workers. The changing structure and organisation of male sex work alongside rapidly changing cultural, academic and medical discourses provide new insight but also new challenges to how we conceive the sexualities of men and male sex workers. Public health initiatives must reflect upon and incorporate this knowledge.
Abstract:
The majority of research examining massively multiplayer online game (MMOG)-based social relationships has used quantitative methodologies. The present study used qualitative semi-structured interviews with 22 Australian World of Warcraft (WoW) players to examine their experiences of MMOG-based social relationships. Interview transcripts underwent thematic analysis and revealed that participants reported experiencing an MMOG-based sense of community (a sense of belonging within the gaming or WoW community), discussed a number of different MMOG-based social identities (such as gamer, WoW player and guild or group member) and stated that they derived social support (a perception that one is cared for and may access resources from others within a group) from their relationships with other players. The findings of this study confirm that MMOG players can form gaming communities. Almost all participants accessed or provided in-game social support, and some gave or received broader emotional support. Players also identified as gamers and guild members. Fewer participants identified as WoW players. Findings indicated that changes to the game environment influence these relationships and further exploration of players' experiences could determine the optimal game features to enhance positive connections with fellow players.
Abstract:
We argue that safeguards are necessary to ensure human rights are adequately protected. All systems of blocking access to online content necessarily raise difficult and problematic issues of infringement of freedom of speech and access to information. Given the importance of access to information across the breadth of modern life, great care must be taken to ensure that any measures designed to protect copyright by blocking access to online locations are proportionate. Any measures to block access to online content must be carefully tailored to avoid serious and disproportionate impact on human rights. This means first that the measures must be effective and adapted to achieve a legitimate purpose. The experience of foreign jurisdictions suggests that this legislation is unlikely to be effective. Unless and until there is clear evidence that the proposed scheme is likely to increase effective returns to Australian creators, this legislation should not be introduced. Second, the principle of proportionality requires ensuring that the proposed legislation does not unnecessarily burden legitimate speech or access to information. As currently worded, the draft legislation may result in online locations being blocked even though they would, if operated in Australia, not contravene Australian law. This is unacceptable, and if introduced, the law should be drafted so that it is clearly limited only to foreign locations where there is clear and compelling evidence that the location would authorise copyright infringement if it were in Australia. Third, proportionality requires that measures are reasonable and strike an appropriate balance between competing interests. This draft legislation provides few safeguards for the public interest or the interests of private actors who would access legitimate information. 
New safeguards should be introduced to ensure that the public interest is well represented at both the stage of the primary application and at any applications to rescind or vary injunctions. We recommend that: The legislation not be introduced unless and until there is compelling evidence that it will have a real and significant positive impact on the effective incomes of Australian creators. The ‘facilitates an infringement’ test in s 115A(1)(b) should be replaced with ‘authorises infringement’. The ‘primary purpose’ test in s 115A(1)(c) should be replaced with: “the online location has no substantial non-infringing uses”. An explicit role for public interest groups as amici curiae should be introduced. Costs of successful applications should be borne by applicants. Injunctions should be valid only for renewable two year terms. Section 115A(5) should be clarified, and cl (b) and (c) be removed. The effectiveness of the scheme should be evaluated in two years.
Abstract:
Background The use of the internet to access information is rapidly increasing; however, the quality of health information provided on various online sites is questionable. We aimed to examine the underlying factors that guide parents' decisions to use online information to manage their child's health care, a behaviour which has not yet been explored systematically. Methods Parents (N=391) completed a questionnaire assessing the standard theory of planned behaviour (TPB) measures of attitude, subjective norm, perceived behavioural control (PBC), and intention as well as the underlying TPB belief-based items (i.e., behavioural, normative, and control beliefs) in addition to a measure of perceived risk and demographic variables. Two months later, consenting parents completed a follow-up telephone questionnaire which assessed the decisions they had made regarding their use of online information to manage their child's health care during the previous 2 months. Results We found support for the TPB constructs of attitude, subjective norm, and PBC as well as the additional construct of perceived risk in predicting parents' intentions to use online information to manage their child's health care, with further support found for intentions, but not PBC, in predicting parents' behaviour. The results of the TPB belief-based analyses also revealed important information about the critical beliefs that guide parents' decisions to engage in this child health management behaviour. Conclusions This theory-based investigation to understand parents' motivations and online information-seeking behaviour is key to developing recommendations and policies to guide more appropriate help-seeking actions among parents.
Abstract:
This chapter is based on a qualitative case study that researched the perceptions of nine male and female pre-service English teachers regarding their preparedness to mentor positive digital conduct on social network sites (SNS). These sites enable individuals to perform public representations of identity, consumed by virtual audiences, with varying degrees of perceived privacy. The chapter frames what we call "identity curation" through three theoretical lenses: performativity, customisation and critical literacy. This chapter discusses one of the themes that emerged from the research: the way in which "normalised" and naturalised representations of femininity on SNS were judged more harshly than masculine representations.
Abstract:
Electrical impedance tomography is a novel technology capable of quantifying ventilation distribution in the lung in real time during various therapeutic manoeuvres. The technique requires changes to the patient's position to place the electrical impedance tomography electrodes circumferentially around the thorax. The impact of these position changes on the time taken to stabilise the regional distribution of ventilation determined by electrical impedance tomography is unknown. This study aimed to determine the time taken for the regional distribution of ventilation determined by electrical impedance tomography to stabilise after changing position. Eight healthy male volunteers were connected to electrical impedance tomography and a pneumotachometer. After 30 minutes of stabilisation in supine, participants were moved into a 60 degree Fowler's position and then returned to supine. Thirty minutes was spent in each position. Concurrent readings of ventilation distribution and tidal volumes were taken every five minutes. A mixed regression model with a random intercept was used to compare the positions and changes over time. The anterior-posterior distribution stabilised after ten minutes in Fowler's position and ten minutes after returning to supine. Left-right stabilisation was achieved after 15 minutes in both Fowler's position and supine. A minimum of 15 minutes of stabilisation should be allowed for spontaneously breathing individuals when assessing ventilation distribution. This time allows stabilisation to occur in the anterior-posterior direction as well as the left-right direction.
Abstract:
We propose an architecture for a rule-based online management system (RuleOMS). Typically, many domains face the problem that stakeholders maintain databases of their core business information and have to take decisions or create reports according to guidelines, policies or regulations. To address this issue we propose the integration of databases, in particular relational databases, with a logic reasoner and rule engine. We argue that defeasible logic is an appropriate formalism to model rules, particularly when the rules are meant to model regulations. The resulting RuleOMS provides an efficient and flexible solution to the problem at hand using defeasible inference. A case study of an online child care management system is used to illustrate the proposed architecture.
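As a toy illustration of defeasible inference over database-style facts (the rule names and the child-care staffing example below are hypothetical, not taken from RuleOMS):

```python
# Toy defeasible-rule evaluator (hypothetical example, not the RuleOMS
# implementation). A defeasible conclusion stands unless a conflicting
# rule also fires and is not defeated via the superiority relation.

def evaluate(facts, rules, superiority):
    """facts: set of literals; rules: list of (name, premises, conclusion);
    superiority: dict mapping a rule name to the set of rules it defeats."""
    fired = [(name, concl) for name, prem, concl in rules if prem <= facts]
    conclusions = set()
    for name, concl in fired:
        # The negation of "p" is "~p" and vice versa.
        neg = concl[1:] if concl.startswith("~") else "~" + concl
        attackers = [n for n, c in fired if c == neg]
        # Keep the conclusion only if every attacker is defeated by this rule.
        if all(a in superiority.get(name, set()) for a in attackers):
            conclusions.add(concl)
    return conclusions

# A general staffing rule with a more specific, overriding exception:
rules = [
    ("r1", {"group_of_10"}, "needs_2_carers"),
    ("r2", {"group_of_10", "all_children_over_5"}, "~needs_2_carers"),
]
superiority = {"r2": {"r1"}}  # the specific exception overrides the general rule

print(evaluate({"group_of_10"}, rules, superiority))
# → {'needs_2_carers'}
print(evaluate({"group_of_10", "all_children_over_5"}, rules, superiority))
# → {'~needs_2_carers'}
```

The appeal of defeasible logic here is that regulations stay declarative: general rules plus overriding exceptions, rather than exception handling hard-coded into database queries.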
Abstract:
It’s the stuff of nightmares: your intimate images are leaked and posted online by somebody you thought you could trust. But in Australia, victims often have no real legal remedy for this kind of abuse. This is the key problem of regulating the internet. Often, speech we might consider abusive or offensive isn’t actually illegal. And even when the law technically prohibits something, enforcing it directly against offenders can be difficult. It is a slow and expensive process, and where the offender or the content is overseas, there is virtually nothing victims can do. Ultimately, punishing intermediaries for content posted by third parties isn’t helpful. But we do need to have a meaningful conversation about how we want our shared online spaces to feel. The providers of these spaces have a moral, if not legal, obligation to facilitate this conversation.
Abstract:
Christmas has come early for copyright owners in Australia. The film company, Roadshow, the pay television company Foxtel, and Rupert Murdoch's News Corp and News Limited--as well as copyright industries--have been clamoring for new copyright powers and remedies. In the summer break, the Coalition Government has responded to such entreaties from its industry supporters and donors, with a new package of copyright laws and policies. There has been significant debate over the proposals between the odd couple of Attorney-General George Brandis and the Minister for Communications, Malcolm Turnbull. There have been deep, philosophical differences between the two Ministers over the copyright agenda. The Attorney-General George Brandis has supported a model of copyright maximalism, with strong rights and remedies for the copyright empires in film, television, and publishing. He has shown little empathy for the information technology companies of the digital economy. The Attorney-General has been impatient to press ahead with a copyright regime. The Minister for Communications, Malcolm Turnbull, has been somewhat more circumspect, recognizing that there is a need to ensure that copyright laws do not adversely impact upon competition in the digital economy. The final proposal is a somewhat awkward compromise between the discipline-and-punish regime preferred by Brandis, and the responsive regulation model favored by Turnbull. In his new book, Information Doesn't Want to Be Free: Laws for the Internet Age, Cory Doctorow has some sage advice for copyright owners: Things that don't make money: Complaining about piracy. Calling your customers thieves. Treating your customers like thieves. In this context, the push by copyright owners and the Coalition Government to have a copyright crackdown may well be counter-productive to their interests.
Abstract:
Background and purpose There are no published studies on the parameterisation and reliability of the single-leg stance (SLS) test with inertial sensors in stroke patients. Purpose: to analyse the reliability (intra-observer/inter-observer) and sensitivity of inertial sensors used for the SLS test in stroke patients. Secondary objective: to compare the records of the two inertial sensors (trunk and lumbar) to detect any significant differences in the kinematic data obtained in the SLS test. Methods Design: cross-sectional study. While performing the SLS test, two inertial sensors were placed at the lumbar (L5-S1) and trunk (T7-T8) regions. Setting: Laboratory of Biomechanics (Health Science Faculty - University of Málaga). Participants: four chronic stroke survivors (over 65 years old). Measurements: displacement and velocity for rotation (X-axis), flexion/extension (Y-axis) and inclination (Z-axis), plus the resultant velocity RV = √(Vx² + Vy² + Vz²). Along with the SLS kinematic variables, descriptive analyses, differences between sensor locations, and intra-observer and inter-observer reliability were calculated. Results Differences between the sensors were significant only for left inclination velocity (p = 0.036) and extension displacement in the non-affected leg with eyes open (p = 0.038). Intra-observer reliability of the trunk sensor ranged from 0.889-0.921 for displacement and 0.849-0.892 for velocity. Intra-observer reliability of the lumbar sensor was between 0.896-0.949 for displacement and 0.873-0.894 for velocity. Inter-observer reliability of the trunk sensor was between 0.878-0.917 for displacement and 0.847-0.884 for velocity. Inter-observer reliability of the lumbar sensor ranged from 0.870-0.940 for displacement and 0.863-0.884 for velocity.
Conclusion There were no significant differences between the kinematic records made by the two inertial sensors placed at the lumbar and thoracic regions during the SLS test. In addition, inertial sensors have the potential to be reliable, valid and sensitive instruments for kinematic measurements during SLS testing, although further research is needed.
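The resultant velocity reported in this study combines the three axis components as the root of their summed squares; a quick numeric sketch (the component values are fabricated for illustration, not taken from the study):

```python
import math

def resultant(vx, vy, vz):
    """Resultant of three orthogonal components: RV = sqrt(vx^2 + vy^2 + vz^2)."""
    return math.sqrt(vx**2 + vy**2 + vz**2)

# Fabricated angular-velocity components (deg/s) for illustration:
print(resultant(3.0, 4.0, 12.0))  # → 13.0
```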
Abstract:
Species distribution models (SDMs) are considered to exemplify Pattern rather than Process based models of a species' response to its environment. Hence when used to map species distribution, the purpose of SDMs can be viewed as interpolation, since species response is measured at a few sites in the study region, and the aim is to interpolate species response at intermediate sites. Increasingly, however, SDMs are also being used to extrapolate species-environment relationships beyond the limits of the study region as represented by the training data. Regardless of whether SDMs are to be used for interpolation or extrapolation, the debate over how to implement SDMs focusses on evaluating the quality of the SDM, both ecologically and mathematically. This paper proposes a framework that includes useful tools previously employed to address uncertainty in habitat modelling. Drawing on existing frameworks for addressing uncertainty in modelling more generally, we outline how these tools inform the development of a broader framework for addressing uncertainty specifically when building habitat models. We focus on extrapolation rather than interpolation, where the emphasis on predictive performance is diluted by concerns for robustness and ecological relevance. We are cognisant of the dangers of excessively propagating uncertainty. Thus, although the framework provides a smorgasbord of approaches, it is intended that the exact menu selected for a particular application is small in size and targets the most important sources of uncertainty. We conclude with some guidance on a strategic approach to identifying these important sources of uncertainty. Whilst various aspects of uncertainty in SDMs have previously been addressed, either as the main aim of a study or as a necessary element of constructing SDMs, this is the first paper to provide a more holistic view.
Abstract:
We propose a new information-theoretic metric, the symmetric Kullback-Leibler divergence (sKL-divergence), to measure the difference between two water diffusivity profiles in high angular resolution diffusion imaging (HARDI). Water diffusivity profiles are modeled as probability density functions on the unit sphere, and the sKL-divergence is computed from a spherical harmonic series, which greatly reduces computational complexity. Adjustment of the orientation of diffusivity functions is essential when the image is being warped, so we propose a fast algorithm to determine the principal direction of diffusivity functions using principal component analysis (PCA). We compare sKL-divergence with other inner-product based cost functions using synthetic samples and real HARDI data, and show that the sKL-divergence is highly sensitive in detecting small differences between two diffusivity profiles and therefore shows promise for applications in the nonlinear registration and multisubject statistical analysis of HARDI data.
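A rough sketch of the symmetrised divergence on discretised diffusivity profiles; the paper evaluates it analytically from a spherical harmonic series, whereas this toy version uses density values at a handful of sphere directions with quadrature weights (all values illustrative):

```python
import numpy as np

def skl_divergence(p, q, weights):
    """Symmetric KL divergence between two densities sampled on the sphere:
    sKL(p, q) = 0.5 * (KL(p||q) + KL(q||p))."""
    p, q, w = (np.asarray(a, float) for a in (p, q, weights))
    kl_pq = np.sum(w * p * np.log(p / q))
    kl_qp = np.sum(w * q * np.log(q / p))
    return 0.5 * (kl_pq + kl_qp)

# Toy diffusivity profiles at four equal-weight sample directions:
w = np.full(4, 0.25)
p = np.array([1.2, 0.8, 1.1, 0.9])
q = np.array([0.9, 1.1, 1.0, 1.0])

print(skl_divergence(p, p, w))  # → 0.0 (identical profiles)
print(skl_divergence(p, q, w) == skl_divergence(q, p, w))  # → True (symmetric)
```

Unlike the plain KL divergence, the symmetrised form does not privilege either profile as the "reference", which matters when comparing two subjects' data.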
Abstract:
Diffusion weighted magnetic resonance (MR) imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitized gradients along a minimum of 6 directions, second-order tensors can be computed to model dominant diffusion processes. However, conventional diffusion tensor imaging (DTI) is not sufficient to resolve crossing fiber tracts. Recently, a number of high-angular resolution schemes with greater than 6 gradient directions have been employed to address this issue. In this paper, we introduce the Tensor Distribution Function (TDF), a probability function defined on the space of symmetric positive definite matrices. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once the optimal TDF (the one that best fits the observed diffusion signal) is determined, the diffusion orientation distribution function (ODF) can easily be computed by analytic integration of the resulting displacement probability function.
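The ensemble-of-Gaussians idea corresponds to the standard multi-tensor signal model, S/S0 = sum_i w_i * exp(-b * g^T D_i g); the tensor and b values below are illustrative, not fitted TDFs:

```python
import numpy as np

def multi_tensor_signal(g, b, tensors, weights):
    """Normalized diffusion signal from a weighted mixture of Gaussian
    compartments: S/S0 = sum_i w_i * exp(-b * g^T D_i g)."""
    g = np.asarray(g, float)
    return sum(w * np.exp(-b * (g @ D @ g)) for w, D in zip(weights, tensors))

# Two crossing fibers as two anisotropic tensors (illustrative values, mm^2/s):
D1 = np.diag([1.7e-3, 0.2e-3, 0.2e-3])  # fiber along x
D2 = np.diag([0.2e-3, 1.7e-3, 0.2e-3])  # fiber along y
b = 1000.0  # s/mm^2

s_x = multi_tensor_signal([1, 0, 0], b, [D1, D2], [0.5, 0.5])
s_z = multi_tensor_signal([0, 0, 1], b, [D1, D2], [0.5, 0.5])
# Attenuation is strongest along the fiber directions and weakest along z,
# a pattern a single second-order tensor cannot capture for crossings.
print(s_x < s_z)  # → True
```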
Abstract:
Fractional anisotropy (FA), a very widely used measure of fiber integrity based on diffusion tensor imaging (DTI), is a problematic concept as it is influenced by several quantities including the number of dominant fiber directions within each voxel, each fiber's anisotropy, and partial volume effects from neighboring gray matter. With high-angular resolution diffusion imaging (HARDI) and the tensor distribution function (TDF), one can reconstruct multiple underlying fibers per voxel and their individual anisotropy measures by representing the diffusion profile as a probabilistic mixture of tensors. We found that FA, when compared with TDF-derived anisotropy measures, correlates poorly with individual fiber anisotropy, and may sub-optimally detect disease processes that affect myelination. By contrast, mean diffusivity (MD) as defined in standard DTI appears to be more accurate. Overall, we argue that novel measures derived from the TDF approach may yield more sensitive and accurate information than DTI-derived measures.
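For comparison, the standard DTI definitions of FA and MD from the tensor eigenvalues can be sketched as follows (these are the textbook formulas, not the TDF-derived per-fiber measures the paper proposes):

```python
import numpy as np

def fa_md(eigvals):
    """Fractional anisotropy and mean diffusivity from the three eigenvalues
    of a diffusion tensor (standard DTI definitions):
    MD = mean(lambda), FA = sqrt(3/2 * sum((lambda - MD)^2) / sum(lambda^2))."""
    lam = np.asarray(eigvals, float)
    md = lam.mean()
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    return fa, md

print(fa_md([1.0, 1.0, 1.0]))           # isotropic tensor: FA = 0.0, MD = 1.0
print(fa_md([1.7e-3, 0.2e-3, 0.2e-3]))  # stick-like tensor: FA close to 1
```

Because FA pools all eigenvalues of a single voxel-level tensor, crossing fibers or gray-matter partial volume depress it even when each fiber is intact, which is the confound the abstract describes.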
Abstract:
High-angular resolution diffusion imaging (HARDI) can reconstruct fiber pathways in the brain with extraordinary detail, identifying anatomical features and connections not seen with conventional MRI. HARDI overcomes several limitations of standard diffusion tensor imaging, which fails to model diffusion correctly in regions where fibers cross or mix. As HARDI can accurately resolve sharp signal peaks in angular space where fibers cross, we studied how many gradients are required in practice to compute accurate orientation density functions, to better understand the tradeoff between longer scanning times and more angular precision. We computed orientation density functions analytically from tensor distribution functions (TDFs) which model the HARDI signal at each point as a unit-mass probability density on the 6D manifold of symmetric positive definite tensors. In simulated two-fiber systems with varying Rician noise, we assessed how many diffusion-sensitized gradients were sufficient to (1) accurately resolve the diffusion profile, and (2) measure the exponential isotropy (EI), a TDF-derived measure of fiber integrity that exploits the full multidirectional HARDI signal. At lower SNR, the reconstruction accuracy, measured using the Kullback-Leibler divergence, rapidly increased with additional gradients, and EI estimation accuracy plateaued at around 70 gradients.
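The Rician noise used in such simulations is conventionally generated by adding independent Gaussian noise to the two quadrature channels of the complex MR signal and taking the magnitude; a minimal sketch of that standard construction (not the study's code):

```python
import numpy as np

def add_rician_noise(signal, sigma, rng):
    """Corrupt a magnitude signal with Rician noise: add Gaussian noise of
    standard deviation sigma to the real and imaginary channels, then take
    the magnitude. SNR is typically quoted as signal / sigma."""
    signal = np.asarray(signal, float)
    real = signal + rng.normal(0.0, sigma, size=signal.shape)
    imag = rng.normal(0.0, sigma, size=signal.shape)
    return np.sqrt(real**2 + imag**2)

rng = np.random.default_rng(42)
noisy = add_rician_noise(np.full(1000, 1.0), 0.2, rng)  # SNR = 5
# Rician noise is positively biased at low SNR: the mean of the noisy
# magnitude exceeds the true signal, unlike additive Gaussian noise.
print(noisy.mean())
```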