
ALMA-IMF Core Working Group, June 11 2020

Connection:
https://ufl.zoom.us/j/99224867876?pwd=Y1ZWMWE0QW44aGN4bDI4QmdlZTl4Zz09

Attendees: Fred, Yohan, Sasha, Alex, Adam, Gemma, Fab, Ben, Hongli, Tapas, Patricio, Roberto, Sylvain
Not attending: Antoine


Agenda:
  1. The “Core” working group
Members and their main interests
  • WG1a: Core extraction and catalog post-selection (coordinator: Fabien)
  • WG1b: Temperature and free-free used for mass estimates (coordinators: Alex & Fred + Roberto for specific free-free questions)
  • WG1c: CMF and mass segregation

  2. Core identification (WG led by Fabien)
  • First source Extraction telecon
  • Sasha will extract sources on the 15 ALMA-IMF fields with getsf. See his presentation at our 2020 consortium meeting.
  • Sylvain will extract sources on the 15 ALMA-IMF fields with GExt2D. See his presentation at our 2020 consortium meeting.
  • For first extraction results, have a look at p20 to p37 of Fabien’s talk at our 2020 consortium meeting. This could help you define paper projects dedicated to cores!
  • For a comparison of extraction results, have a look at the talks by Ben and Yohan at our 2020 consortium meeting.
  • A project using Artificial Intelligence to identify cores (and possibly filaments) will be started at IPAG this autumn (Isabelle, Jeff, Estelle, Fred…) 
  • Questions: 
    • Who could run Dendrograms on the 15 fields? 
Answer from Patricio: Andres could. He should contact Fabien to organize the work.
  • What about analyzing synthetic sources injected into maps to assess completeness levels?
Answer from Fred: That’s a task we always do with getsf (see Motte+2018, all Herschel CMF work, Yohan+ in prep.) when it is necessary to discuss completeness levels (for the CMF, mass segregation estimates…). The 90% completeness levels vary with the intensity of the background and the clustering of sources. Injection is done (1) on the real background (initial image minus sources) of each analyzed image and (2) on the complete initial images (with the real sources). Did you have something else in mind? (A minimal injection sketch is given after this list.)
  • Yohan will present his Python script to investigate and compare cores’ catalogs.
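As context for the completeness discussion above, here is a minimal sketch of such a synthetic-source injection test in Python. It is illustrative only: the background array, the Gaussian source model, the flux grid, and the toy peak-finding step are placeholder assumptions, whereas the actual test injects sources into the getsf background product and reruns the real extraction codes (getsf, GExt2D).

```python
# Minimal sketch of the synthetic-source injection test described above.
# The background image and the toy peak-finding step are placeholders:
# in practice the injection is done on the getsf "image minus sources"
# background and the detection is rerun with getsf/GExt2D.
import numpy as np

def inject_gaussian(image, x0, y0, flux, fwhm_pix):
    """Add a 2D Gaussian source of total flux `flux` to a copy of `image`."""
    sigma = fwhm_pix / 2.3548
    y, x = np.indices(image.shape)
    peak = flux / (2.0 * np.pi * sigma**2)
    return image + peak * np.exp(-((x - x0)**2 + (y - y0)**2) / (2.0 * sigma**2))

def completeness(background, fluxes, n_trials=100, fwhm_pix=4.0, snr_thresh=5.0):
    """Recovered fraction of injected sources at each flux (toy detection step)."""
    rms = np.std(background)
    rng = np.random.default_rng(0)
    recovered = []
    for flux in fluxes:
        n_found = 0
        for _ in range(n_trials):
            x0 = rng.uniform(10, background.shape[1] - 10)
            y0 = rng.uniform(10, background.shape[0] - 10)
            img = inject_gaussian(background, x0, y0, flux, fwhm_pix)
            # Toy detection: a peak above snr_thresh * rms near the injected position.
            cut = img[int(y0) - 3:int(y0) + 4, int(x0) - 3:int(x0) + 4]
            if cut.max() > snr_thresh * rms:
                n_found += 1
        recovered.append(n_found / n_trials)
    # Interpolate this curve to find the 90% completeness flux.
    return np.array(recovered)
```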


  3. Core extraction strategy (the most important point to discuss today)
Simultaneous extraction or cross-matching of catalogs
Note from Sasha: It is much better to produce simultaneous (multi-wavelength) extraction catalogs. Associating two independent catalogs for images with substantially different resolutions is a bad idea. Imagine this: a high-resolution image shows 10 cores and a low-resolution image has only one blob in the same place. Arbitrarily associating the blob with just one source from the high-resolution image is the worst option; it is necessary to deblend the blob into 10 cores using the positions known from the high-resolution image, otherwise the measurements will be wrong by a very large factor. Matching independent catalogs works only in the simplest (unrealistic) case where both images resolve all cores.
Comment from Fab: the 1mm and 3mm images have the same angular resolution.
  • By default, use Band 6 (1mm) and Band 3 (3mm) but do not ask for 2 detections: 
Best Sensitivity = BSENS for detection and measurement (not used for hot cores)
Cleanest for measurement only
Roberto said that the 3mm band should not be used to drive the detection in evolved regions...
Questions: 
  • What do we do about 3mm-only cores? (largest images, 3mm emissivity less well constrained, ??% more sources in W43-MM2&3)
  • What matching criteria do we use between bands, and between the various extraction catalogs? (A positional cross-match sketch is given after these notes.)
  • Does it make sense to publish several catalogs? or median size and flux estimates?
Note from Sasha: It is not the best idea to average measured quantities between the catalogs from different codes. This would decrease their accuracy if one of the codes measures more accurately than the other. The question of measurement accuracy can only be answered by running the same benchmark with both codes and comparing the measured values with the truth table.
  • What would be the strategy to combine catalogs to improve reliability and uncertainty of the listed sources?
Note from Sasha: Combining measurements from independent codes is not a good idea. It is much better to use the measurements from the code that gives better accuracy against the truth table of a benchmark, and to use the measurements from the second code to assess the total uncertainties of the measured quantities.
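As an illustration of the matching-criteria question above, the sketch below performs a purely positional cross-match between two band catalogs with astropy. The file names, column names, and the 0.4" matching radius are assumptions made for this example; as Sasha notes, simultaneous multi-wavelength extraction is preferred, and this kind of a posteriori association is mainly useful for comparing catalogs at matched resolution.

```python
# Hedged sketch of a purely positional cross-match between two band catalogs.
# File names, column names, and the matching radius are illustrative
# assumptions, not consortium choices.
from astropy.coordinates import SkyCoord
from astropy.table import Table
import astropy.units as u

cat_1mm = Table.read("cores_band6.fits")   # hypothetical 1mm (Band 6) catalog
cat_3mm = Table.read("cores_band3.fits")   # hypothetical 3mm (Band 3) catalog

c1 = SkyCoord(cat_1mm["ra"], cat_1mm["dec"], unit="deg")
c3 = SkyCoord(cat_3mm["ra"], cat_3mm["dec"], unit="deg")

# Nearest 3mm counterpart for each 1mm core
idx, sep2d, _ = c1.match_to_catalog_sky(c3)

# Accept matches closer than, e.g., half a synthesized beam (placeholder value)
max_sep = 0.4 * u.arcsec
cat_1mm["band3_index"] = idx
cat_1mm["band3_sep_arcsec"] = sep2d.to(u.arcsec)
cat_1mm["has_band3_match"] = sep2d < max_sep
```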

  • Relatively easy improvements
  • Tests on the coherent component of the W43-MM2&3 image (from MnGSeg). Sasha said that at first glance, it does not enlarge the core sample. Yohan will further investigate.
  • Timea proposes to test the 7M+12M dataset for the accuracy of flux measurements. Patricio: the flux of low-mass cores increased with dendrogram extraction. Thomas: just removed weaker sources.
To be checked again, at least on a few fields.
  • Test on the images corrected for free-free contamination.
Roberto: first results of the free-free estimation using recombination lines (RLs) are here:
https://docs.google.com/document/d/1dxZzeDgzZRRV9oFsjNnNStI79IdXHrp0DnB7X5gvcaU/edit?usp=sharing
However, I don’t think it will reach the accuracy needed to run core-identification algorithms on images with the free-free emission subtracted pixel by pixel. Rather, it can serve as a map showing where the free-free contamination is significant and as a way to estimate the free-free flux in selected areas.
Fabien proposes to use these images as masks (for intermediate regions).
Roberto and Adam say that VLA maps (W51, W43, G351) work better because the emission is optically thin and S/N is larger.
We must investigate whether extracting the free-free flux at the position of each core is more efficient than using 1mm/3mm flux ratios to remove the free-free contribution from the thermal dust emission of cores (a flux-ratio sketch is given after the questions below).
Questions:
  • Does it help extract more cores?
  • Does it help remove artefacts (interferometric artefacts, free-free emission peaks, ...), or does it add spurious/dubious sources?
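To make the 1mm/3mm flux-ratio option above concrete, here is a hedged sketch of a two-component (free-free + thermal dust) decomposition of the two continuum fluxes. The spectral indices are generic textbook assumptions (optically thin free-free ~ ν^-0.1, optically thin dust ~ ν^(2+β) with β ≈ 1.5), not values adopted by the consortium, and the example fluxes are made up.

```python
# Hedged sketch: split a core's 1mm and 3mm fluxes into free-free and dust
# components, assuming generic spectral indices (placeholder values).
import numpy as np

def split_freefree_dust(s_1mm, s_3mm, nu1=230.0, nu2=100.0,
                        alpha_ff=-0.1, alpha_dust=3.5):
    """Return (free-free, dust) fluxes at nu1, given fluxes at nu1 and nu2 [GHz]."""
    r = nu2 / nu1
    # s_1mm = ff + dust
    # s_3mm = ff * r**alpha_ff + dust * r**alpha_dust
    a = np.array([[1.0, 1.0],
                  [r**alpha_ff, r**alpha_dust]])
    ff, dust = np.linalg.solve(a, np.array([s_1mm, s_3mm]))
    return ff, dust

# Example with made-up numbers: 10 mJy at 1mm, 3 mJy at 3mm
ff_1mm, dust_1mm = split_freefree_dust(10.0, 3.0)
```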

  • Longer-term improvements (depending on line cubes)
Tests on the pure-continuum image (from, e.g., Jordan’s method) with the help of Nathalie and Yohan. Anyone interested in joining the effort?
Adam and Nathalie said that this requires a deep clean on each channel, which is difficult and will take a long time.
Lee: a pure-continuum image would be a dream for hot cores like those in W43-MM2.
Question: What else?

  4. Mass estimates (new WG led by Alex & Fred)
All the aspects below will/must be further discussed in a separate telecon.
  • Roberto will write a first paper on the ionized gas of all fields. Have a look at his talk at our 2020 consortium meeting.
  • Alex will soon start building up a PPMAP temperature cube for W43-MM2&3. Other fields will follow. 
Your field priorities: ...
Necessary data are:
  • ALMA 12M-only cleanest images at 1mm and 3mm
  • Herschel 5-band images
  • LABOCA 870 micron images
  • SABOCA and/or Artemis 350 micron images
    Other useful data:
  • ALMA 7M-only images? at 1mm and 3mm?
  • NOEMA 1mm-3mm images?
  • MAMBO 1mm and/or BOLOCAM 1mm images?
  • (a quick note, if they are of good quality, the combined 12M/7M images can be used by PPMAP. Need to discuss though)
  • Who would like to start investigating thermometers such as the CH3OH, CH3CN, and CH3CCH lines? Timea will start on G338. Anyone else? Nathalie has begun with W43-MM1 and can go on with W43-MM2&3.
This information cannot be used for the first mass estimates we will make, but we can expect to publish better-corrected core temperatures, and thus masses, in the future (the mass formula these temperatures feed into is sketched after this list).
  • Who could join Brian to help him get ready to ask for more ammonia data? Gemma? others?
  • Any ideas to constrain kappa? Anyone interested? The 1mm/3mm ratios are sometimes strange, maybe due to hot-core emission… To be investigated.
  • Fred could present the proxy she proposes for the protostellar luminosity, using the line contamination from COMs.
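Since the temperature, free-free, and kappa items above all feed into the same optically thin dust mass estimate, M = S_ν d² / (κ_ν B_ν(T)), here is a minimal sketch of that formula with astropy; the flux, distance, temperature, and κ values are illustrative placeholders, not ALMA-IMF adopted numbers.

```python
# Minimal sketch of the optically thin dust-mass estimate
# M = S_nu * d**2 / (kappa_nu * B_nu(T_dust)); all numbers are placeholders.
import astropy.units as u
from astropy.modeling.models import BlackBody

def core_mass(flux, distance, t_dust, kappa, nu=230 * u.GHz):
    """Gas+dust mass of an optically thin core (kappa includes the gas-to-dust ratio)."""
    b_nu = BlackBody(temperature=t_dust)(nu)   # erg / (s cm2 Hz sr)
    mass = flux * distance**2 / (kappa * b_nu)
    return mass.to(u.Msun, equivalencies=u.dimensionless_angles())

m = core_mass(flux=10 * u.mJy,             # integrated 1mm flux (placeholder)
              distance=5.5 * u.kpc,        # region distance (placeholder)
              t_dust=20 * u.K,             # dust temperature, e.g. from PPMAP
              kappa=0.01 * u.cm**2 / u.g)  # dust opacity at ~1.3 mm (assumed value)
print(m)
```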

  5. Foreseen papers on “Cores”
Survey papers:
  • First-pass global CMF: Core extraction process, first core catalog release, all fields (Fabien et al.). Have a look at his talk at our 2020 consortium meeting.
  • Search for prestellar cores: 2 or 3 papers (Sylvain, Timea et al.; Patricio, Ben et al.; maybe Thomas et al. for W43)
    1. Sylvain & Timea: the idea is to detect/reject outflows in a systematic way (before a complete and detailed analysis of CO outflows) for all detected cores, in order to produce a consortium-validated list of prestellar cores, with a clear focus on those we can be sure are indeed massive. This would also support any follow-ups other groups in the consortium could be interested in.
    2. Patricio & Ben: ...
    3. Thomas et al. (in prep.) have already done this careful analysis on W43-MM1 (Cycle 2 data). It could be extended to W43-MM2...

Papers dedicated to individual (or a couple of) regions:
  • CMF in W43-MM2&3 (young and intermediate): Yohan et al. Have a look at his talk at our 2020 consortium meeting.
  • CMF in G12=W33 (intermediate): Antoine with Mélanie starting her PhD in October.
  • CMF in evolved regions? (e.g. G010, G333?, W49 from Roberto's project): Thomas starting his IRyA postdoc in October.
  • Ionizing feedback in G333.6: Roberto et al.
  • Case-study of G337.92
  • Mass segregation in ??: HongLi
  • Line survey in W51-E: Estrella et al.
  • ?: Patricio
  • others...

  6. Tools/data to be shared among the team
  • Yohan’s Python script to investigate and compare cores’ catalogs
  • LABOCA 870 micron and SABOCA/Artemis 350 micron data for most regions
  • Other data
  • …


  7. Core fragmentation/multiplicity (feasibility to be discussed)
  • Isabelle and Benjamin are statistically investigating the fragmentation cascade in Herschel and simulated images. The final goal is to link the multiplicity of cores to the multiplicity of YSOs (as IR detections) and to correct the CMFs.
  • It requires emission maps sensitive to a large range of scales down to the protostellar binary regime (100 AU). The fragmentation should be investigated at >4 scales separated by a factor of 2, thus over at least a decade. 
  • Questions: 
    • How to add short spacings to continuum ALMA data? Could we use NIKA2, LABOCA, or Bolocam data? Shall we use PPMAP to combine them with Herschel data? (A feathering sketch is given after this list.)
    • What high-resolution data could we use? Adam and Patricio have some.
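For the short-spacings question above, one standard route is CASA's feather task, sketched below. The file names, the choice of single-dish map, and the effective dish diameter are placeholders, and the single-dish image must first be regridded and scaled to the ALMA continuum frequency; whether NIKA2, LABOCA, or Bolocam data (or a PPMAP combination with Herschel) are the right low-resolution input remains the open question.

```python
# Hedged sketch: feathering an ALMA continuum image with a single-dish map
# in CASA 6 (modular style). File names and parameter values are placeholders.
from casatasks import importfits, feather

# Single-dish map (e.g. LABOCA 870 um), already regridded and scaled
# to the ALMA continuum frequency.
importfits(fitsimage="singledish_regridded.fits",
           imagename="singledish_regridded.im")

feather(imagename="W43MM2_cont_feathered.image",  # output image
        highres="W43MM2_cont_12m7m.image",        # interferometric continuum image
        lowres="singledish_regridded.im",         # single-dish image
        effdishdiam=12.0)                         # effective dish diameter in m (placeholder)
```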
