This section addresses a number of Frequently Asked Questions about MaNGA data that are not covered in the MaNGA tutorials. If you have a general question about SDSS data, please visit the SDSS help page. If your question is not listed below or on the SDSS help page, please contact the SDSS helpdesk, and your question will be directed to a survey expert in the appropriate area.
Where can I find general information about the MaNGA sample?
The DRPALL file contains basic information about all objects targeted by MaNGA, including coordinates, redshifts, and basic photometry.
Will my favourite galaxy be observed by MaNGA for a future data release?
Please visit our Field Layout Forecast Page for a description of our field layout choices and the probabilities that selected fields will be observed by MaNGA.
How can I identify and select galaxies or standard star targets?
The DRPALL catalog contains information for both galaxies and standard star observations. To select the galaxies, use the MANGA_TARGET1/2/3 (abbreviated in DRPALL and individual FITS headers as MNGTARG1/2/3) bitmasks. Any object with a non-zero MNGTARG1 is a galaxy in the MaNGA main galaxy program. Objects with non-zero MNGTARG2 are non-galaxy objects (stars or sky targets). MNGTARG3 identifies ancillary program objects (usually also galaxies). For more information, visit the bitmasks page.
How can I tell the target class of an observed galaxy?
Use the MNGTARG1 bitmask in the DRPALL file. The possible bits are:

# MaNGA targeting bitmask for galaxy targets
maskbits MANGA_TARGET1  0 NONE                   "Unused"
maskbits MANGA_TARGET1  1 PRIMARY_PLUS_COM       "March 2014 commissioning"
maskbits MANGA_TARGET1  2 SECONDARY_COM          "March 2014 commissioning"
maskbits MANGA_TARGET1  3 COLOR_ENHANCED_COM     "March 2014 commissioning"
maskbits MANGA_TARGET1  4 PRIMARY_v1_1_0         "First tag, August 2014 plates"
maskbits MANGA_TARGET1  5 SECONDARY_v1_1_0       "First tag, August 2014 plates"
maskbits MANGA_TARGET1  6 COLOR_ENHANCED_v1_1_0  "First tag, August 2014 plates"
maskbits MANGA_TARGET1  7 PRIMARY_COM2           "July 2014 commissioning"
maskbits MANGA_TARGET1  8 SECONDARY_COM2         "July 2014 commissioning"
maskbits MANGA_TARGET1  9 COLOR_ENHANCED_COM2    "July 2014 commissioning"
maskbits MANGA_TARGET1 10 PRIMARY_v1_2_0         ""
maskbits MANGA_TARGET1 11 SECONDARY_v1_2_0       ""
maskbits MANGA_TARGET1 12 COLOR_ENHANCED_v1_2_0  ""
maskbits MANGA_TARGET1 13 FILLER                 "Filler targets"
maskbits MANGA_TARGET1 14 RETIRED                "Retired bit; do not use"
You can find all objects that meet a particular target class using the following IDL snippet:

; read the DRPALL summary table and select all v1_2_0 main-sample targets
drpall = mrdfits('drpall-v1_5_4.fits', 1)
indx = where((drpall.mngtarg1 and 2^10L + 2^11L + 2^12L) ne 0)
This would return the indices to every row in the drpall table for which either the PRIMARY_v1_2_0, SECONDARY_v1_2_0, or COLOR_ENHANCED_v1_2_0 bits are set.
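The same bit test works in any language. Here is a minimal Python sketch of the selection; to keep it self-contained it uses made-up MNGTARG1 values rather than reading a real DRPALL file (which you would normally do with a FITS library):

```python
# Bits 10, 11, 12 are PRIMARY_v1_2_0, SECONDARY_v1_2_0, COLOR_ENHANCED_v1_2_0.
MAIN_V1_2_0 = (1 << 10) | (1 << 11) | (1 << 12)

# Hypothetical MNGTARG1 values standing in for a real DRPALL column.
mngtarg1 = [0, 1 << 10, 1 << 12, 1 << 2, (1 << 11) | (1 << 13)]

# Keep the indices of rows with any of the three bits set.
indx = [i for i, t in enumerate(mngtarg1) if t & MAIN_V1_2_0]
print(indx)  # [1, 2, 4]: only these rows have a v1_2_0 bit set
```

Combining the bits with a bitwise OR and testing against the column with a bitwise AND is the standard pattern for all MaNGA maskbits.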
The version numbers and dates above describe versions of the sample selection, which was tagged as v1_2_0 in September 2015. Please refer to the bitmasks page for links to the most up-to-date bitmask definitions.
Why do you output IVAR (inverse variance) instead of errors?
IVAR (inverse variance) offers convenient properties: bad values with effectively infinite errors are represented simply by IVAR = 0. Quite often you do not even need to worry about the mask or the bad pixels, since they are automatically excluded once their IVARs are set to zero. For example, in χ2 fitting you multiply (signal - model)^2 by IVAR, so you do not need to exclude points with invalid measurements as long as they have IVAR = 0: their contribution to the χ2 is zero.
When you compute S/N, you just multiply the signal by sqrt(IVAR); all points with IVAR = 0 get S/N of zero and are excluded when you select high-S/N points, without any explicit exclusion in the code. In both cases, if you used ERR with a flag value instead, you would first have to exclude the flagged points before doing the χ2 or S/N calculation.
Also, if you compute an IVAR-weighted average, the IVAR of the result is simply the sum of the individual IVARs, which makes error propagation convenient.
Plus, IVAR = 0 is much more elegant than a sentinel value like -999 or Inf for ERR.
Finally, for the spectra, all previous SDSS/BOSS spectra (which represent the majority of all astronomical spectra publicly released, ever) have used IVAR instead of ERR.
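The points above can be sketched in a few lines of Python with toy numbers (the data values are illustrative, not from any real MaNGA spectrum):

```python
import math

# Toy data: the third point is bad, so its inverse variance is zero.
signal = [1.0, 2.0, 99.0, 4.0]
model  = [1.1, 1.9,  3.0, 4.2]
ivar   = [4.0, 4.0,  0.0, 1.0]

# Chi^2: bad points contribute nothing because IVAR = 0 there.
chi2 = sum((s - m) ** 2 * iv for s, m, iv in zip(signal, model, ivar))

# S/N: bad points automatically get S/N = 0.
snr = [s * math.sqrt(iv) for s, iv in zip(signal, ivar)]

# IVAR-weighted mean: the IVAR of the result is the sum of the IVARs.
ivar_tot = sum(ivar)
mean = sum(s * iv for s, iv in zip(signal, ivar)) / ivar_tot
```

Note that the wildly wrong third point (flux of 99.0) never needs to be removed by hand; its zero IVAR silences it in every calculation.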
How do I map RA and Dec onto the pixel locations?
The headers of the datacubes contain the WCS astrometry information. Routines like adxy.pro from the Goddard Library in IDL can be used to parse this information and convert between pixel and celestial coordinates.
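For real work you should let a WCS library (adxy.pro in IDL, or astropy.wcs in Python) do this conversion. Purely as an illustration of what the header keywords mean, here is a linear approximation in pure Python, valid only close to the reference pixel; the header values below are hypothetical, and the full spherical projection terms are ignored:

```python
import math

# Hypothetical WCS keywords from a MaNGA cube header (values are illustrative).
hdr = {'CRVAL1': 205.4380, 'CRVAL2': 27.0042,      # RA, Dec at ref. pixel (deg)
       'CRPIX1': 19.0, 'CRPIX2': 19.0,             # 1-indexed reference pixel
       'CD1_1': -1.38889e-4, 'CD2_2': 1.38889e-4}  # deg/pixel (0.5" spaxels)

def pix_to_sky(x, y, hdr):
    """Linear approximation near the reference pixel; a full treatment
    (astropy.wcs / adxy.pro) handles the spherical projection properly."""
    dx = x - (hdr['CRPIX1'] - 1.0)  # CRPIX is 1-indexed; arrays are 0-indexed
    dy = y - (hdr['CRPIX2'] - 1.0)
    dec = hdr['CRVAL2'] + hdr['CD2_2'] * dy
    ra = hdr['CRVAL1'] + hdr['CD1_1'] * dx / math.cos(math.radians(dec))
    return ra, dec

ra, dec = pix_to_sky(18.0, 18.0, hdr)  # reference pixel maps back to CRVAL1/2
```

The cos(Dec) factor accounts for the convergence of lines of constant RA on the sky; forgetting it is the most common error when converting by hand.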
In the RSS files, what is the purpose of HDU #9 [XPOS] and #10 [YPOS] and why is the format in [NWAVE × (NFIBER*NEXP)]?
These extensions report the actual locations of the circular fibers within the IFU on each target (relative to the target center) for every exposure that was obtained. Because of differential atmospheric refraction (DAR), the effective location of each aperture changes smoothly with wavelength, and this explains the format of this information.
It is this wavelength-dependent array of fiber aperture positions that is used to construct the 3D arrays given in the data cubes.
How do I check the data quality of my MaNGA cube?
Not all data are of the same quality. On very rare occasions an IFU can fall out of the plate for a series of exposures, in which case the total integration time on that object can be less than usual. Similarly, the pipeline sometimes detects potential inconsistencies in the flux calibration, etc., that the end user should be aware of. These are encapsulated in the DRP3QUAL bitmask in the header of each cube/RSS file and listed as a column in the DRPALL summary file. Bitmask values for DR13 are described here; loosely speaking, the CRITICAL flag is the most important one to pay attention to.
Why do some spaxels at the edges have unphysical jumps in flux or no flux coverage in certain wavelength ranges?
In most cases where the flux jumps to zero, the inverse variance should also be zero (i.e., an infinite error vector). There should also be a mask bit set in the MASK extension of the cubes. Anything with bit 10 set (i.e., 2^10 = 1024) should not be used for science.
The reason for such jumps is that close to the edge of the IFU, small changes in which fibers are combined into a given spaxel between one wavelength and the next can cause large changes in the resulting flux. You can sometimes see the gradual decline in the number of fibers contributing to a problem area as a falloff in the inverse-variance vector. At some point (< 30% of nominal depth) the pipeline starts flagging LOWCOV and DONOTUSE to warn that this effect is likely.
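Screening out these pixels is the same bitwise test as for the targeting bits. A minimal Python sketch, with toy flux and mask arrays standing in for one spaxel's spectrum:

```python
DONOTUSE = 1 << 10  # bit 10 of the MASK extension (2^10 = 1024)

# Toy flux and mask values; the third pixel is flagged DONOTUSE.
flux = [1.2, 1.3, 0.0, 1.1]
mask = [0, 0, DONOTUSE, 0]

# Keep only pixels where the DONOTUSE bit is not set.
good = [f for f, m in zip(flux, mask) if not (m & DONOTUSE)]
print(good)  # [1.2, 1.3, 1.1]
```

Testing the individual bit, rather than comparing the mask value to 1024 with equality, correctly handles pixels where several flags are set at once.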
How do I weight galaxies to recover a luminosity-independent volume-limited sample?
The selection methodology applied to the MaNGA sample means that weights must be applied in any statistical population analysis to correct for the selection function. The main MaNGA sample galaxies are selected to lie within a redshift range, zmin < z < zmax, that depends on absolute i-band magnitude (Mi) for the Primary and Secondary samples, and on absolute i-band magnitude and NUV-i color for the Primary+ (Primary + Color-Enhanced (CE)) sample. zmin and zmax are chosen so that, at every absolute i-band magnitude (or magnitude and color for the Primary+ sample), the sample has both the same number density of galaxies and angular size distributions matched to the IFU sizes. This results in lower and narrower redshift ranges for less luminous galaxies and higher and wider redshift ranges for more luminous galaxies.
At a given Mi (or Mi and NUV-i color for the Primary+ sample) the sample is effectively volume limited in that all galaxies with zmin(Mi) < z < zmax(Mi) are targeted irrespective of their other properties. However, that volume varies with Mi. Therefore in any analysis of the properties of MaNGA galaxies as a function of anything other than Mi we must correct for this varying selection volume, Vs(Mi) (the volume with zmin(Mi) < z < zmax(Mi)). The simplest approach is just to correct the galaxies back to a volume limited sample by applying a weight (W) to each galaxy in any calculation such that W = Vr/Vs where Vr is an arbitrary reference volume.
If you look in the target catalog in the CAS you will find ZMIN, ZMAX, SZMIN, SZMAX, EZMIN and EZMAX for each galaxy. These are the minimum and maximum redshifts at which each galaxy could have been targeted for the Primary, Secondary and Primary+ samples respectively. So for a given galaxy, just convert the appropriate (given which sample it is in) minimum and maximum redshifts into a volume (Vs); the weight is then 1/Vs, or Vr/Vs where Vr is some reference volume.
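Converting ZMIN/ZMAX into a selection volume requires adopting a cosmology. A sketch in pure Python, assuming a flat ΛCDM model with illustrative parameter values (not necessarily those used for MaNGA targeting) and hypothetical redshift limits; the survey area factor is omitted because it cancels in the weight ratio Vr/Vs:

```python
import math

H0, OM = 70.0, 0.3  # illustrative flat-LCDM parameters (km/s/Mpc, Omega_m)
C = 299792.458      # speed of light, km/s

def comoving_distance(z, steps=1000):
    """Trapezoidal integration of c/H(z') from 0 to z, in Mpc."""
    if z <= 0.0:
        return 0.0
    dz = z / steps
    zs = [i * dz for i in range(steps + 1)]
    integrand = [1.0 / math.sqrt(OM * (1 + zi) ** 3 + (1 - OM)) for zi in zs]
    trap = dz * (sum(integrand) - 0.5 * (integrand[0] + integrand[-1]))
    return (C / H0) * trap

def shell_volume(zmin, zmax):
    """Full-sky comoving volume between zmin and zmax, in Mpc^3."""
    dmin, dmax = comoving_distance(zmin), comoving_distance(zmax)
    return 4.0 / 3.0 * math.pi * (dmax ** 3 - dmin ** 3)

# Weight for one galaxy relative to an arbitrary reference volume Vr:
Vs = shell_volume(0.02, 0.05)  # hypothetical ZMIN, ZMAX for one galaxy
Vr = shell_volume(0.00, 0.15)  # arbitrary reference volume
W = Vr / Vs
```

Any cosmology library will do the distance integral for you; the only essential point is that each galaxy's weight is the reference volume divided by its own selection volume.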
One additional complication arises if you wish to combine the Primary(+) and Secondary sample galaxies. The Secondary sample was randomly sampled to the appropriate surface density so that the 2/3 to 1/3 Primary-to-Secondary split would be achieved, with a random sampling rate of 67.1%. However, we allow IFUs that cannot be allocated to any other targets to be allocated to Secondaries not included in the random sampling, which effectively changes the sampling rate to 76.9%. These additional galaxies will most likely be on plates with a low surface density of targets and so could be biased towards lower-density regions. If you are concerned about such a bias, the safest approach is to ignore these additional galaxies, i.e., restrict the Secondary sample to galaxies with RANFLAG = 1 and then multiply the Secondary weights by 1/0.671 to reflect the random sampling rate. If you are not concerned, you can use all the Secondaries, multiplying the weight by 1/0.769.
Important: Due to a very recently discovered bug in the target selection code, the Secondary random sampling is in fact not truly random but samples in such a way as to make the number density distribution flat as a function of stellar mass. This is a small change, since the density distribution was already quite close to flat with stellar mass, but it has some potential consequences when calculating weights for samples that include the Secondary sample. Instead of the constant 1/0.671 (or 1/0.769) factor described above, you should really use the sampling probability that was applied to each individual galaxy, which depends on stellar mass. This probability is not included in DR13. We will make it available soon, along with a full description of how to calculate the weights using it (see the caveats page and Wake et al. in prep). In the meantime, ignoring the individual sampling probability and using the weights as described above should be sufficiently accurate for most purposes; the systematic error due to this effect should be smaller than the sample variance with the current sample size.
Note: You should never use the Color-Enhanced sample on its own for statistical population studies. There are regions of color-magnitude space that are not populated in the CE sample. If you want to include the CE sample, use the Primary+ sample (Primary + CE), where all regions of our nominal color-magnitude space are sampled. EZMIN and EZMAX are defined for all galaxies in the Primary+ sample.