• Osman Pedersen posted an update 12 months ago

    Coronary artery calcium (CAC) quantified on computed tomography (CT) scans is a robust predictor of atherosclerotic coronary disease; however, the feasibility and relevance of quantifying CAC from lung cancer radiotherapy planning CT scans is unknown. We used a previously validated deep learning (DL) model to assess whether CAC is a predictor of all-cause mortality and major adverse cardiac events (MACEs).

    We performed a retrospective analysis of non-contrast-enhanced radiotherapy planning CT scans from 428 patients with locally advanced lung cancer. The DL-CAC algorithm was previously trained on 1,636 cardiac-gated CT scans and tested on four clinical trial cohorts. Plaques ≥ 1 mm³ were measured to generate an Agatston-like DL-CAC score, and patients were grouped as DL-CAC = 0 (very low risk) or DL-CAC ≥ 1 (elevated risk). Cox and Fine-Gray regressions were adjusted for lung cancer and cardiovascular risk factors.
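The "Agatston-like" score mentioned above follows the standard Agatston convention: each calcified lesion's area is weighted by a density factor derived from its peak attenuation (1 for 130–199 HU, 2 for 200–299 HU, 3 for 300–399 HU, 4 for ≥ 400 HU), and the weighted areas are summed. A minimal sketch of that weighting and the study's binary risk grouping (this is not the authors' DL pipeline; the lesion inputs are hypothetical):

```python
def density_weight(peak_hu: float) -> int:
    """Standard Agatston density factor for a lesion's peak attenuation (HU)."""
    if peak_hu < 130:
        return 0  # below the calcium threshold; lesion is not counted
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def agatston_score(lesions) -> float:
    """Sum of lesion area (mm^2) times density weight over all lesions."""
    return sum(area * density_weight(peak_hu) for area, peak_hu in lesions)

def risk_group(score: float) -> str:
    """Binary grouping used in the study: DL-CAC = 0 vs. DL-CAC >= 1."""
    return "very low risk (DL-CAC = 0)" if score == 0 else "elevated risk (DL-CAC >= 1)"

# Hypothetical lesions: (area in mm^2, peak attenuation in HU)
lesions = [(4.0, 250), (2.5, 410)]
score = agatston_score(lesions)  # 4.0 * 2 + 2.5 * 4 = 18.0
```

Scoring aside, note that the study's grouping collapses the continuous score to a single 0 vs. ≥ 1 cut, so any detectable calcium places a patient in the elevated-risk group.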

    The median follow-up was 18.1 months. The majority of patients (61.4%) had a DL-CAC ≥ 1. These findings support automated cardiac risk screening before cancer therapy begins.

Pedigree problems are typical genetics tasks in schools. They are well suited to helping students learn scientific reasoning, as they represent realistic genetic problems. However, pedigree problems also pose complex demands, especially for secondary students: they require a suitable solution strategy and technical knowledge. In this study, we examined the approaches used by N = 89 secondary school students when solving two different pedigree problems. In our qualitative analysis of student responses, we examined how two groups of secondary students with varying degrees of experience in genetics constructed arguments to support their decisions. To do so, we categorized I = 516 propositions from students' responses using theory- and data-driven codes. Comparison between the groups revealed that "advanced genetics" students (n = 44) formulated more arguments, referred more frequently to specific family constellations, and considered superficial pedigree features less often. Conversely, "beginning genetics" students did not use a conclusive approach of step-by-step falsification but argued for the mode of inheritance they believed was correct. Advanced genetics students, in contrast to beginners, to some extent used a falsification strategy. Finally, we demonstrate which family members students used in their decisions and discuss a variety of typical but unreliable arguments.

Items that are held in visual working memory can guide attention toward matching features in the environment. Predominant theories propose that to guide attention, a memory item must be internally prioritized and given a special template status, which builds on the assumption that there are qualitatively distinct states in working memory.
Here, we propose that no distinct states in working memory are necessary to explain why some items guide attention and others do not. Instead, we propose that variations in attentional guidance arise because individual memories naturally vary in their representational fidelity, and only highly accurate memories automatically guide attention. Across a series of experiments and a simulation, we show that (a) items in working memory vary naturally in representational fidelity; (b) attention is guided by all well-represented items, though frequently only one item is represented well enough to guide; and (c) no special working memory state for prioritized items is necessary to explain guidance. These findings challenge current models of attentional guidance and working memory and instead support a simpler account of how working memory and attention interact: only the representational fidelity of memories, which varies naturally between items, determines whether and how strongly a memory representation guides attention. (PsycInfo Database Record (c) 2022 APA, all rights reserved).

For vision and audition to accurately inform judgments about an object's location, the brain must reconcile the variable anatomical correspondence of the eyes and ears, and the different frames of reference in which stimuli are initially encoded. To do so, it has been suggested that multisensory cues are eventually represented within a common frame of reference. If this is the case, then they should be similarly susceptible to distortion of this reference frame. Following this reasoning, we asked participants to locate visual and auditory probes in a crossmodal variant of the induced Roelofs effect, a visual illusion in which a large, off-center visual frame biases the observer's perceived straight-ahead. Auditory probes were mislocalized in the same direction and with a similar magnitude as visual probes due to the off-center visual frame.
However, an off-center auditory frame did not elicit a significant mislocalization of visual probes, indicating that auditory context does not elicit an induced Roelofs effect. These results suggest that the locations of auditory and visual stimuli are represented within a common frame of reference, but that the brain does not rely on stationary auditory context, as it does on visual context, to maintain this reference frame.

Much recent research and theorizing in the field of reasoning has been concerned with intuitive sensitivity to logical validity, such as the logic-brightness effect, in which logically valid arguments are judged to have a "brighter" typeface than invalid arguments. We propose and test a novel signal competition account of this phenomenon. Our account makes two assumptions: (a) as per the demands of the logic-brightness task, people attempt to find a perceptual signal to guide brightness judgments, but (b) when the perceptual signal is hard to discern, they instead attend to cues such as argument validity. Experiment 1 tested this account by manipulating the difficulty of the perceptual contrast. When contrast discrimination was relatively difficult, we replicated the logic-brightness effect. When the discrimination was easy, the effect was eliminated. Experiment 2 manipulated the ambiguity of the perceptual task, comparing discrimination performance when the perceptual contrast was labeled in terms of rating "brightness" or "darkness". When the less ambiguous darkness labeling was used, there was no evidence of a logic-brightness effect. In both experiments, individual sensitivity to the perceptual discrimination was negatively correlated with sensitivity to argument validity. Hierarchical latent mixture modeling revealed distinct individual strategies: responses based on perceptual cues, responses based on validity, or guessing.
Consistent with the signal competition account, the proportion of participants responding to validity increased with perceptual discrimination difficulty or task ambiguity. The results challenge explanations of the logic-brightness effect based on parallel dual-process models of reasoning.

Working memory (WM) has a limited capacity; however, this limitation can be mitigated by selecting individual items from the set currently held in WM for prioritization. The selection mechanism underlying this prioritization ability is referred to as the focus of attention (FOA) in WM. Although impressive progress has been achieved in recent years, a fundamental question remains unclear: do perception and WM share one FOA? In the current study, we investigated the hypothesis that only a perceptual task tapping object-based attention can divert the FOA in WM. We adopted a retro-cue WM paradigm and inserted a perceptual task after the offset of the cue. Critically, we manipulated the type of attention (object-based attention in Experiments 1-3, feature-based attention in Experiment 4, and spatial attention in Experiment 5) consumed by the perceptual task. We found that participants were able to prioritize a retro-cued representation in WM, and the retro-cue benefit on memory accuracy was intact regardless of the perceptual task. Critically, the retro-cue benefit on the response time of the WM task was significantly reduced only after an object-based attention perceptual task (Experiments 1, 2, 3a, and 3b), while remaining constant after a feature-based attention (Experiment 4) or spatial attention (Experiment 5) perceptual task. These results suggest that WM and perception share an object-based FOA, and that an object-based attention perceptual task can divert the FOA in WM. Meanwhile, the current study further confirms that sustained attention is not necessary for selective maintenance in WM.
In episodic memory research, there is a debate concerning whether decision-making in item recognition and source memory is better explained by models that assume all-or-none retrieval processes or continuous underlying strengths. One aspect in which these classes of models tend to differ is their predictions regarding the ability to retrieve contextual details (or source details) of an experienced event, given that the event itself is not recognized. All-or-none or high-threshold models predict that when items are unrecognized, source retrieval is not possible and only guess responses can be elicited. In contrast, models assuming continuous strengths predict that it is possible to retrieve the source of unrecognized items, albeit with low accuracy. Empirically, there have been numerous studies reporting either chance accuracy or above-chance accuracy for source memory in the absence of recognition. Crucially, studies presenting recognition and source judgements for the same item in immediate succession are particularly informative here.

The forgetting curve is one of the most well-known and established findings in memory research. Knowing the pattern of memory change over time can provide insight into underlying cognitive mechanisms. The default understanding is that forgetting follows a continuous, negatively accelerating function, such as a power function. We show that this understanding is incorrect. We first consider whether forgetting rates vary across the different intervals of time reported in the literature. We found that there were different patterns of forgetting across different time periods. Next, we consider evidence that complex memories, such as those derived from event cognition, show different patterns, such as linear forgetting. Based on these findings, we argue that forgetting cannot be adequately explained by a single continuous function.
As an alternative, we propose a Memory Phases Framework, through which the progress of memory can be divided into phases that parallel changes associated with neurological memory consolidation. These phases include (a) Working Memory (WM) during the first minute of retention, (b) Early Long-Term Memory (e-LTM) during the 12 hr following encoding, (c) a period of Transitional Long-Term Memory (t-LTM) during the following week or so, and (d) Long-Lasting Memory (LLM) beyond this. These findings are significant for any field of study where being able to predict retention and forgetting is important, such as training, eyewitness memory, or clinical treatment. They are also important for evaluating behavioral or neuroscientific manipulations targeting memories over longer periods of time, when different processes may be involved.
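The contrast the abstract draws between power-function and linear forgetting can be made concrete: under a power function, far more is lost early in the retention interval than later, whereas linear forgetting loses the same amount in every unit of time. A toy comparison (all parameter values are hypothetical, chosen only to illustrate the shapes):

```python
def power_retention(t: float, a: float = 1.0, b: float = 0.5) -> float:
    """Power-law forgetting: retention falls steeply at first, then decelerates."""
    return a * (1 + t) ** (-b)

def linear_retention(t: float, r0: float = 1.0, slope: float = 0.1) -> float:
    """Linear forgetting: a constant amount is lost per unit of time."""
    return max(0.0, r0 - slope * t)

# Under the power law, the loss from t=0 to t=1 exceeds the loss from t=8 to t=9;
# under linear forgetting, the two losses are identical.
early_drop_power = power_retention(0) - power_retention(1)
late_drop_power = power_retention(8) - power_retention(9)
early_drop_linear = linear_retention(0) - linear_retention(1)
late_drop_linear = linear_retention(8) - linear_retention(9)
```

A single fitted curve of either shape will systematically misestimate the other pattern, which is the empirical motivation for dividing retention into phases rather than forcing one function across all intervals.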
