Abstract:
Background: Bone healing is sensitive to the initial mechanical conditions, with tissue differentiation being determined within days of trauma. Whilst axial compression is regarded as stimulatory, the role of interfragmentary shear is controversial. The purpose of this study was to determine how the initial mechanical conditions produced by interfragmentary shear and torsion differ from those produced by axial compressive movements. Methods: The finite element method was used to estimate the strain, pressure and fluid flow in the early callus tissue produced by the different modes of interfragmentary movement found in vivo. Additionally, tissue formation was predicted according to three principally different mechanobiological theories. Findings: Large interfragmentary shear movements produced comparable strains and less fluid flow and pressure than moderate axial interfragmentary movements. Additionally, combined axial and shear movements did not result in overall increases in the strains, and the strain magnitudes were similar to those produced by axial movements alone. Only when axial movements were applied did the non-distortional component of the pressure–deformation theory influence the initial tissue predictions. Interpretation: This study found that the mechanical stimuli generated by interfragmentary shear and torsion differed from those produced by axial interfragmentary movements. The initial tissue formation as predicted by the mechanobiological theories was dominated by the deformation stimulus.
Abstract:
Secondary fracture healing in long bones leads to the successive formation of intricate patterns of tissues in the newly formed callus. The main aim of this work was to quantitatively describe the topology of these tissue patterns at different stages of the healing process and to generate averaged images of tissue distribution. This averaging procedure was based on stained histological sections (2, 3, 6, and 9 weeks post-operatively) of 64 sheep with a 3 mm tibial mid-shaft osteotomy, stabilized either with a rigid or a semi-rigid external fixator. Before averaging, histological images were sorted for topology according to six identified tissue patterns. The averaged images were obtained for both fixation types and the lateral and medial side separately. For each case, the result of the averaging procedure was a collection of six images characterizing quantitatively the progression of the healing process. In addition, quantified descriptions of the newly formed cartilage and the bone area fractions (BA/TA) of the bony callus are presented. For all cases, a linear increase in the BA/TA of the bony callus was observed. The slope was greatest in the case of the most rigid stabilization and lowest in the case of the least stiff. This topological description of the progression of bone healing will allow quantitative validation (or falsification) of current mechano-biological theories.
Abstract:
There have been increasing international efforts to ensure that health care policies are evidence based. One area where there is a lack of ‘effectiveness’ evidence is in the use of end-of-life care pathways (EOLCP) (1). Despite the lack of evidence supporting the efficacy of the EOLCP, their use has been endorsed in the recent national palliative care strategy document in the UK (2). In addition, a publication endorsed by the Australian Government (titled: Supporting Australians to Live Well at the End of Life - National Palliative Care Strategy 2010) (3) recommended a national roll-out of EOLCP across all sectors (primary, acute and aged care) in Australia. According to this document, it is a measure of “appropriateness” and “effectiveness” for promoting quality end-of-life care.
Abstract:
The oriented single crystal Raman spectrum of leiteite has been obtained and the spectra related to the structure of the mineral. The intensities of the observed bands vary according to orientation allowing them to be assigned to either Ag or Bg modes. Ag bands are generally the most intense in the CAAC spectrum, followed by ACCA, CBBC, and ABBA whereas Bg bands are generally the most intense in the CBAC followed by ABCA. The CAAC and ACCA spectra are identical, as are those obtained in the CBBC and ABBA orientations. Both cross-polarised spectra are identical. Band assignments were made with respect to bridging and non-bridging As-O bonds.
Abstract:
In recent years there has been dramatic growth in the number and popularity of online social networks. Many networks have more than 100 million registered users, such as Facebook, MySpace, QZone and Windows Live Spaces. People may connect, discover and share by using these online social networks. The exponential growth of online communities in the area of social networks has drawn researchers' attention to the importance of managing trust in online environments. Users of online social networks may share their experiences and opinions within the networks about an item, which may be a product or service. The user faces the problem of evaluating trust in a service or service provider before making a choice. Recommendations may be received through a chain of friends in the network, so the problem for the user is to be able to evaluate various types of trust opinions and recommendations. These opinions or recommendations strongly influence whether other users of the community choose to use or enjoy the item. Collaborative filtering is the most popular method in recommender systems. The task in collaborative filtering is to predict the utility of items to a particular user based on a database of user ratings from a sample or population of other users. Because different people have different tastes, they rate items differently according to their subjective taste. If two people rate a set of items similarly, they share similar tastes. In a recommender system, this information is used to recommend items that one participant likes to other persons in the same cluster. But collaborative filtering performs poorly when there are insufficient previous common ratings available between users, commonly known as the cold start problem. To overcome the cold start problem, and with the dramatic growth of online social networks, the trust-based approach to recommendation has emerged.
This approach assumes a trust network among users and makes recommendations based on the ratings of the users that are directly or indirectly trusted by the target user.
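The rating-prediction step described above can be sketched as user-based collaborative filtering. Everything here (the user names, items, ratings, and the choice of Pearson correlation as the similarity weight) is invented for illustration; in a trust-based system the weight would come from the trust network rather than from rating overlap.

```python
# Minimal user-based collaborative filtering sketch (illustrative data).

def pearson(a, b):
    """Similarity between two users over the items both have rated."""
    common = [i for i in a if i in b]
    if len(common) < 2:
        return 0.0
    ma = sum(a[i] for i in common) / len(common)
    mb = sum(b[i] for i in common) / len(common)
    num = sum((a[i] - ma) * (b[i] - mb) for i in common)
    da = sum((a[i] - ma) ** 2 for i in common) ** 0.5
    db = sum((b[i] - mb) ** 2 for i in common) ** 0.5
    return num / (da * db) if da and db else 0.0

def predict(ratings, user, item):
    """Weighted deviation-from-mean prediction for an unrated item."""
    mean_u = sum(ratings[user].values()) / len(ratings[user])
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        sim = pearson(ratings[user], r)
        if sim <= 0:          # ignore dissimilar (or untrusted) users
            continue
        mean_o = sum(r.values()) / len(r)
        num += sim * (r[item] - mean_o)
        den += abs(sim)
    return mean_u + num / den if den else mean_u

ratings = {
    "alice": {"a": 5, "b": 3, "c": 4},
    "bob":   {"a": 5, "b": 3, "c": 4, "d": 5},
    "carol": {"a": 1, "b": 5, "c": 2, "d": 1},
}
# bob's tastes match alice's, so his high rating of "d" dominates
predicted = predict(ratings, "alice", "d")
```

The same machinery carries over to the trust-based setting by replacing `pearson` with a trust score propagated through the chain of friends.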
Abstract:
How does the image of the future operate upon history, and upon national and individual identities? To what extent are possible futures colonized by the image? What are the un-said futurecratic discourses that underlie the image of the future? Such questions inspired the examination of Japan’s futures images in this thesis. The theoretical points of departure for this examination are Polak’s (1973) seminal research into the theory of the ‘image of the future’ and seven contemporary Japanese texts which offer various alternative images for Japan’s futures, selected as representative of a ‘national conversation’ about the futures of that nation. These seven images of the future are: 1. Report of the Prime Minister’s Commission on Japan’s Goals in the 21st Century—The Frontier Within: Individual Empowerment and Better Governance in the New Millennium, compiled by a committee headed by Japan’s preeminent Jungian psychologist Kawai Hayao (1928-2007); 2. Slow Is Beautiful—a publication by Tsuji Shinichi, in which he re-images Japan as a culture represented by the metaphor of the sloth, concerned with slow and quality-oriented livingry as a preferred image of the future to Japan’s current post-bubble cult of speed and economic efficiency; 3. MuRatopia—an image of the future in the form of a microcosmic prototype community and on-going project based on the historically significant island of Awaji, established by Japanese economist and futures thinker Yamaguchi Kaoru; 4. F.U.C.K, I Love Japan, by author Tanja Yujiro, which provides this seven-text line-up with a youth-oriented sub-culture perspective on that nation’s futures; 5. IMAGINATION / CREATION—a compilation of round-table discussions about Japan’s futures seen from the point of view of Japan’s creative vanguard; 6. Visionary People in a Visionless Country: 21 Earth Connecting Human Stories—a collection of twenty-one essays compiled by Denmark-born Tokyo resident Peter David Pedersen; and, 7.
EXODUS to the Land of Hope, authored by Murakami Ryu, one of Japan’s most prolific and influential writers. This novel suggests a future scenario portraying a massive exodus of Japan’s youth, who, literate with state-of-the-art information and communication technologies (ICTs), move en masse to Japan’s northern island of Hokkaido to launch a cyber-revolution from the peripheries. The thesis employs a Futures Triangle Analysis (FTA) as the macro organizing framework and as such examines both pushes of the present and weights from the past before moving to focus on the pulls to the future represented by the seven texts mentioned above. Inayatullah’s (1999) Causal Layered Analysis (CLA) is the analytical framework used in examining the texts. Poststructuralist concepts derived primarily from the work of Michel Foucault are a particular (but not exclusive) reference point for the analytical approach it encompasses. The research questions which reflect the triangulated analytic matrix are: 1. What are the pushes—in terms of current trends—that are affecting Japan’s futures? 2. What are the historical and cultural weights that influence Japan’s futures? 3. What are the emerging transformative Japanese images of the future discourses, as embodied in actual texts, and what potential do they offer for transformative change in Japan? Research questions one and two are discussed in Chapter five and research question three is discussed in Chapter six. The first two research questions should be considered preliminary. The weights outlined in Chapter five indicate that the forces working against change in Japan are formidable, structurally deep-rooted, widespread, and under-recognized as change-averse. Findings and analyses of the push dimension reveal strong forces towards a potentially very different type of Japan.
However, it is the seven contemporary Japanese images of the future, from which there is hope for transformative potential, which form the analytical heart of the thesis. In analyzing these texts the thesis establishes the richness of Japan’s images of the future and, as such, demonstrates the robustness of Japan’s stance vis-à-vis the problem of a perceived map-less and model-less future for Japan. Frontier is a useful image of the future, whose hybrid textuality, consisting of government, business, academia, and creative minority perspectives, demonstrates the earnestness of Japan’s leaders in favour of the creation of innovative futures for that nation. Slow is powerful in its aim to reconceptualize Japan’s philosophies of temporality and build a new kind of nation founded on the principles of a human-oriented and expanded vision of economy based around the core metaphor of slowness culture. However, its viability in Japan, with its post-Meiji historical pushes towards an increasingly speed-obsessed social construction of reality, could render it impotent. MuRatopia is compelling in its creative hybridity, indicative of an advanced IT society, set in a modern-day utopian space based upon principles of a highly communicative social paradigm and sustainability. IMAGINATION / CREATION is less the plan than the platform for a new discussion on Japan’s transformation from an econo-centric social framework to a new Creative Age. It accords with emerging discourses from the Creative Industries, which would re-conceive of Japan as a leading maker of meaning, rather than as the so-called guzu, a term referred to in the book meaning ‘laggard’. In total, Love Japan is still the most idiosyncratic of all the images of the future discussed. Its communication style, which appeals to Japan’s youth cohort, establishes it as a potentially formidable change agent in a competitive market of futures images.
Visionary People is a compelling image for its revolutionary and subversive stance against Japan’s vision-less political leadership, showing that it is the people, not the futures-making elite or aristocracy, who must take the lead and create a new vanguard for the nation. Finally, Murakami’s Exodus cannot be ruled out as a compelling image of the future. Sharing the appeal of Tanja’s Love Japan to an increasingly disenfranchised youth, Exodus portrays a near-term future that is achievable in the here and now, by Japan’s teenagers, using information and communications technologies (ICTs) to subvert leadership and create utopianist communities based on alternative social principles. The principal theoretical contribution of this investigation is its development of the Japanese image of the future. In this respect, the literature reviews represent a significant compilation, specifically about Japanese futures thinking, the Japanese image of the future, and the Japanese utopia. Though not exhaustive, this compilation will hopefully serve as a useful starting point for future research, not only on the Japanese image of the future, but also for image of the future research generally. Many of the sources are in Japanese, and their English summaries are an added reason to respect this achievement. Secondly, the seven images of the future analysed in Chapter six represent the first time that Japanese image of the future texts have been systematically organized and analysed. Their translation from Japanese to English can be claimed as a significant secondary contribution. What is more, they have been analysed according to current futures methodologies that reveal a layeredness, depth, and overall richness existing in Japanese futures images.
Revealing this image-richness has been one of the most significant findings of this investigation, suggesting that there is fertile research to be done in this still under-explored field, whose implications go beyond domestic Japanese concerns and may offer rich material for futures thinkers and researchers, Japanologists, social planners, and policy makers.
Abstract:
The combination of alcohol and driving is a major health and economic burden to most communities in industrialised countries. The total cost of crashes for Australia in 1996 was estimated at approximately 15 billion dollars, and the costs for fatal crashes were about 3 billion dollars (BTE, 2000). According to the Bureau of Infrastructure, Transport and Regional Development and Local Government (2009; BITRDLG), the overall cost of road fatality crashes for 2006 was $3.87 billion, with a single fatal crash costing an estimated $2.67 million. A major contributing factor to crashes involving serious injury is alcohol intoxication while driving. It is a well documented fact that consumption of liquor impairs judgment of speed and distance and increases involvement in higher risk behaviours (Waller, Hansen, Stutts, & Popkin, 1986a; Waller et al., 1986b). Waller et al. (1986a; b) assert that liquor impairs psychomotor function and therefore renders the driver impaired in a crisis situation. This impairment includes vision (degraded), information processing (slowed), steering, and performing two tasks at once in congested traffic (Moskowitz & Burns, 1990). As blood alcohol concentration (BAC) levels increase, the risk of crashing and fatality increases exponentially (Department of Transport and Main Roads, 2009; DTMR). According to Compton et al. (2002), as cited in the Department of Transport and Main Roads (2009), crash risk based on probability is five times higher at a BAC of 0.10 compared to a BAC of 0.00. The type of injury patterns sustained also tends to be more severe when liquor is involved, especially with injuries to the brain (Waller et al., 1986b). Single and Rohl (1997) reported that 30% of all fatal crashes in Australia where alcohol involvement was known were associated with a BAC above the legal limit of 0.05 g/100 ml. Alcohol related crashes therefore contribute to a third of the total cost of fatal crashes (i.e.
$1 billion annually), and crashes where alcohol is involved are more likely to result in death or serious injury (ARRB Transport Research, 1999). It is a major concern that a drug capable of impairment such as alcohol is the most available and popular drug in Australia (Australian Institute of Health and Welfare, 2007; AIHW). According to the AIHW (2007), 89.9% of the approximately 25,000 Australians over the age of 14 surveyed had consumed alcohol at some point in time, and 82.9% had consumed liquor in the previous year. This study found that 12.1% of individuals admitted to driving a motor vehicle whilst intoxicated. In general, males consumed more liquor in all age groups. In Queensland there were 21,503 road crashes in 2001, involving 324 fatalities, and the largest contributing factor was alcohol and/or drugs (Road Traffic Report, 2001). There were 23,438 road crashes in 2004, involving 289 fatalities, and again the largest contributing factor was alcohol and/or drugs (DTMR, 2009). Although a number of measures such as random breath testing have been effective in reducing the road toll (Watson, Fraine & Mitchell, 1995), the recidivist drink driver remains a serious problem. These findings were later supported by research by Leal, King, and Lewis (2006). This Queensland study found that of the 24,661 drink drivers intercepted in 2004, 3,679 (14.9%) were recidivists with multiple drink driving convictions in the previous three years covered (Leal et al., 2006). The legal definition of the term “recidivist” is consistent with the Transport Operations (Road Use Management) Act (1995) and is assigned to individuals who have been charged with multiple drink driving offences in the previous five years. In Australia relatively little attention has been given to prevention programs that target high-risk repeat drink drivers. However, over the last ten years a rehabilitation program specifically designed to reduce recidivism among repeat drink drivers has been operating in Queensland.
The program, formally known as the “Under the Limit” drink driving rehabilitation program (UTL), was designed and implemented by the research team at the Centre for Accident Research and Road Safety in Queensland with funding from the Federal Office of Road Safety and the Institute of Criminology (see Sheehan, Schonfeld & Davey, 1995). By 2009 over 8,500 drink driving offenders had been referred to the program (Australian Institute of Criminology, 2009).
Abstract:
A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of the inevitable imperfections due to human error, lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults can be found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying those software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done with the use of machine learning algorithms which use examples of fault-prone and not-fault-prone modules to develop predictive models of quality. In order to learn the numerical mapping between module and classification, a module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality. In this work, data is obtained from two sources, the NASA Metrics Data Program and the open source Eclipse project. Feature selection before learning is applied, and in this area a number of different feature selection methods are applied to find which work best. Two machine learning algorithms are applied to the data - Naive Bayes and the Support Vector Machine - and predictive results are compared to those of previous efforts and found to be superior on selected data sets and comparable on others. In addition, a new classification method is proposed, Rank Sum, in which a ranking abstraction is laid over bin densities for each class, and a classification is determined based on the sum of ranks over features.
A novel extension of this method is also described, based on an observed polarising of points by class when rank sum is applied to training data to convert it into a 2D rank sum space. SVM is applied to this transformed data to produce models whose parameters can be set according to trade-off curves to obtain a particular performance trade-off.
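The core classification step described above can be sketched with a small Gaussian Naive Bayes classifier over module metrics. The metric choices here (lines of code, cyclomatic complexity), the values, and the fault labels are invented for illustration and are not drawn from the NASA MDP or Eclipse data sets.

```python
# Gaussian Naive Bayes over software metrics (illustrative data).
import math

def fit(samples, labels):
    """Per-class mean/variance for each metric, plus class priors."""
    model = {}
    for cls in set(labels):
        rows = [s for s, l in zip(samples, labels) if l == cls]
        means = [sum(col) / len(rows) for col in zip(*rows)]
        varis = [max(sum((x - m) ** 2 for x in col) / len(rows), 1e-9)
                 for col, m in zip(zip(*rows), means)]
        model[cls] = (means, varis, len(rows) / len(samples))
    return model

def predict(model, x):
    """Class with the highest log-posterior under feature independence."""
    best, best_lp = None, -math.inf
    for cls, (means, varis, prior) in model.items():
        lp = math.log(prior)
        for xi, m, v in zip(x, means, varis):
            lp += -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
        if lp > best_lp:
            best, best_lp = cls, lp
    return best

# metrics per module: [lines of code, cyclomatic complexity]
modules = [[120, 4], [90, 3], [1500, 40], [1100, 35], [200, 6], [1300, 30]]
faulty  = [0, 0, 1, 1, 0, 1]          # 1 = fault-prone
model = fit(modules, faulty)
flag = predict(model, [1000, 28])     # a large, complex module
```

In the thesis the same role is played by Naive Bayes and SVM over much richer metric sets; this sketch only shows the shape of the module-to-classification mapping.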
Abstract:
Background: We examined whether registered and unregistered donors’ perceptions about transplant recipients’ previous behavior (e.g., substance use) and responsibility for illness differed based on their deceased organ donor registration decisions. Methods: Students and community members from Queensland, Australia were surveyed about their perceptions of transplant recipients. Results: Respondents (N = 465) were grouped based on their organ donor registration status to determine if their perceptions about transplant recipients differed. Compared to registered respondents, a higher proportion of unregistered respondents held more negative and less favorable perceptions of recipients. Multivariate analysis of variance confirmed statistically that unregistered respondents evaluated recipients more negatively than registered respondents, F(6,449) = 5.33, p < .001. Unregistered respondents were more likely to view recipients as a smoker, substance user, or alcohol dependent and as undeserving of a transplant, blameworthy, and responsible for their illness. Conclusion: Potential donors’ perceptions of transplant recipients’ behavior and responsibility for illness differ according to their registration status. Future interventions should challenge negative perceptions about recipients’ deservingness and responsibility and promote the perspective that people from all walks of life need transplants, with the aim of ultimately encouraging an increase in donor registration.
Abstract:
Human hair fibres are ubiquitous in nature and are found frequently at crime scenes, often as a result of exchange between the perpetrator, victim and/or the surroundings according to Locard's Principle. Therefore, hair fibre evidence can provide important information for crime investigation. For human hair evidence, the current forensic methods of analysis rely on comparisons of either hair morphology by microscopic examination or nuclear and mitochondrial DNA analyses. Unfortunately, in some instances the utilisation of microscopy and DNA analyses is difficult and often not feasible. This dissertation is arguably the first comprehensive investigation aimed at comparing, classifying and identifying single human scalp hair fibres with the aid of FTIR-ATR spectroscopy in a forensic context. Spectra were collected from the hair of 66 subjects of Asian, Caucasian and African (i.e. African-type) origin. The fibres ranged from untreated to variously mildly and heavily cosmetically treated hairs. The collected spectra reflected the physical and chemical nature of a hair near the surface, particularly the cuticle layer. In total, 550 spectra were acquired and processed to construct a relatively large database. To assist with the interpretation of the complex spectra from various types of human hair, derivative spectroscopy and chemometric methods such as Principal Component Analysis (PCA), Fuzzy Clustering (FC) and the Multi-Criteria Decision Making (MCDM) methods Preference Ranking Organisation Method for Enrichment Evaluation (PROMETHEE) and Geometrical Analysis for Interactive Aid (GAIA) were utilised. FTIR-ATR spectroscopy had two important advantages over previous methods: (i) sample throughput and spectral collection were significantly improved (no physical flattening or microscope manipulations), and (ii) given the recent advances in FTIR-ATR instrument portability, there is real potential to transfer this work's findings seamlessly to in-field applications.
The "raw" spectra, spectral subtractions and second derivative spectra were compared to demonstrate the subtle differences in human hair. SEM images were used as corroborative evidence to demonstrate the surface topography of hair, indicating that the condition of the cuticle surface could be of three types: untreated, mildly treated and treated. Extensive studies of the potential spectral band regions responsible for matching and discrimination of various types of hair samples suggested that the 1690-1500 cm-1 IR spectral region was to be preferred over the commonly used 1750-800 cm-1 region. The principal reason was the presence of the highly variable spectral profiles of cystine oxidation products (1200-1000 cm-1), which contributed significantly to spectral scatter and hence poor hair sample matching. In the preferred 1690-1500 cm-1 region, conformational changes in the keratin protein, attributed to α-helical to β-sheet transitions in the Amide I and Amide II vibrations, played a significant role in matching and discrimination of the spectra and hence of the hair fibre samples. For gender comparison, the Amide II band is significant for differentiation. The results illustrated that male hair spectra exhibit a more intense β-sheet vibration in the Amide II band at approximately 1511 cm-1, whilst female hair spectra display a more intense α-helical vibration at 1520-1515 cm-1. In terms of chemical composition, female hair spectra exhibit greater intensity from the amino acid tryptophan (1554 cm-1) and from aspartic and glutamic acid (1577 cm-1). It was also observed that, for the separation of samples based on racial differences, untreated Caucasian hair was discriminated from Asian hair as a result of having higher levels of the amino acid cystine and of cysteic acid. However, when mildly or chemically treated, Asian and Caucasian hair fibres are similar, whereas African-type hair fibres are different.
In terms of the investigation's novel contribution to the field of forensic science, it has allowed for the development of a multifaceted, methodical protocol where previously none had existed. The protocol is a systematic method to rapidly investigate unknown or questioned single human hair FTIR-ATR spectra from different genders and racial origins, including fibres with different cosmetic treatments. Unknown or questioned spectra are first separated on the basis of chemical treatment (untreated, mildly treated or chemically treated), then by gender, and then by racial origin (Asian, Caucasian or African-type). The methodology has the potential to complement the current forensic methods of fibre analysis (i.e. microscopy and DNA), providing information at the morphological, genetic and structural levels.
Abstract:
What informs members of the church community as they learn? Do the ways people engage with information differ according to the circumstances in which they learn? Informed learning, or the ways in which people use information in the learning experience and the degree to which they are aware of that, has become a focus of contemporary information literacy research. This essay explores the nature of informed learning in the context of the church as a learning community. It is anticipated that insights resulting from this exploration may help church organisations, church leaders and lay people to consider how information can be used to grow faith, develop relationships, manage the church and respond to religious knowledge, which support the pursuit of spiritual wellness and the cultivation of lifelong learning. Information professionals within the church community and the broader information profession are encouraged to foster their awareness of the impact that engagement with information has in the learning experience and in the prioritising of lifelong learning in community contexts.
Abstract:
Australian National Cinema and Australian Television Culture. These two books, magisterial accounts of Australian media cultures, are very different. The first analyses (according to its cover blurb) 'the distinct and diverse nature of Australian cinema'; the second offers 'a picture of Australian television'. The books share an author. Despite this, their objects of study are constituted very differently. The first is replete with examples of particular films, analyses of their representational strategies, and links to the social context of production. The second addresses almost no programs, and those that are mentioned appear only in passing. There is no analysis of any particular television text. The difference between these books cannot be explained in terms of authorial fickleness: rather, it represents the different ways in which television and film have been constructed as objects of study. While film has a recognised canon and a tradition of close textual analysis, in the study of television the programs themselves have tended to vanish - as they do in Australian Television Culture. Most academic work on Australian television is not interested in its programs. Writers have tended to find other aspects more rewarding: industries, institutions, ownership, legislation, technology and production. Australian Television Culture is part of this tradition, and an example of how such work, done well, can be a useful contribution to understanding the medium.
Abstract:
According to Zygmunt Bauman in Liquid Modernity (2000), the formerly solid and stable institutions of social life that characterised earlier stages of modernity have become fluid. He sees this as an outcome of the modernist project of progress itself, which in seeking to dismantle oppressive structures failed to reconstruct new roles for society, community and the individual. The un-tethering of social life from tradition in the latter stages of the twentieth century has produced unprecedented freedoms and unparalleled uncertainties, at least in the West. Although Bauman’s elaboration of some of the features and drivers of liquid modernity – increased mobility, rapid communications technologies, individualism – suggests it to be a neologism for globalisation, it is arguably also the context which has allowed this phenomenon to flourish. The qualities of fluidity, leakage, and flow that distinguish uncontained liquids also characterise globalisation, which encompasses a range of global trends and processes no longer confined to, or controlled by, the ‘container’ of the nation or state. The concept of liquid modernity helps to explain the conditions under which globalisation discourses have found a purchase and, by extension, the world in which contemporary children’s literature, media, and culture are produced. Perhaps more significantly, it points to the fluid conceptions of self and other that inform the ‘liquid’ worldview of the current generation of consumers of texts for children and young adults. This generation is growing up under the phase of globalisation we describe in this chapter.
Abstract:
This paper presents a novel two-stage information filtering model which combines the merits of term-based and pattern-based approaches to effectively filter a sheer volume of information. In particular, the first filtering stage is supported by a novel rough analysis model which efficiently removes a large number of irrelevant documents, thereby addressing the overload problem. The second filtering stage is empowered by a semantically rich pattern taxonomy mining model which effectively fetches incoming documents according to the specific information needs of a user, thereby addressing the mismatch problem. Experiments have been conducted to compare the proposed two-stage filtering (T-SM) model with other possible "term-based + pattern-based" or "term-based + term-based" IF models. The results based on the RCV1 corpus show that the T-SM model significantly outperforms the other types of "two-stage" IF models.
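The two-stage idea can be sketched as follows, with a simple term-overlap filter standing in for the paper's rough analysis model and weighted term pairs standing in for its pattern taxonomy mining model; the profile, patterns, and documents here are invented for illustration.

```python
# Two-stage filtering sketch: a cheap first stage discards clearly
# irrelevant documents, then a pattern stage ranks the survivors.

def stage1(docs, profile_terms, threshold=2):
    """Keep documents sharing at least `threshold` terms with the profile
    (addresses the overload problem)."""
    kept = []
    for doc in docs:
        words = set(doc.lower().split())
        if len(words & profile_terms) >= threshold:
            kept.append(doc)
    return kept

def stage2(docs, patterns):
    """Rank surviving documents by weighted term-pair matches
    (addresses the mismatch problem)."""
    def score(doc):
        words = set(doc.lower().split())
        return sum(w for pair, w in patterns.items() if set(pair) <= words)
    return sorted(docs, key=score, reverse=True)

profile = {"information", "filtering", "pattern", "taxonomy", "mining"}
patterns = {("pattern", "taxonomy"): 2.0, ("information", "filtering"): 1.5}

docs = [
    "pattern taxonomy mining for information filtering",
    "information filtering with term statistics",
    "cooking recipes and kitchen tips",
]
survivors = stage1(docs, profile)   # drops the irrelevant document
ranking = stage2(survivors, patterns)
```

The division of labour mirrors the T-SM design: the first stage is fast and recall-oriented, and the more expensive pattern matching only runs on what survives.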
Abstract:
We have read with great interest the retrospective study by Caffaro and Avanzi1 evaluating the relation between narrowing of the spinal canal and neurological deficits in patients with burst-type fractures of the spine. The authors are to be commended for obtaining detailed neurological and radiological data in a large cohort of 227 patients. The authors conclude: “The percentage of narrowing of the spinal canal proved to be a pre-disposing factor for the severity of the neurological status in thoracolumbar and lumbar burst-type fractures according to the classifications of Denis and Magerl.” Although this conclusion is mainly in accordance with previous findings, we would like to comment on the methodological approach applied in the current study.