[1] "The TIFF file does not appear to be a GeoTIFF (no CRS information found)."
NULL
[1] "The TIFF file does not appear to be a GeoTIFF (no CRS information found)."
NULL
The 1960s images in the folder //Aerial photos//1960s do not contain spatial reference information.
Need interactive map of study area with outlines of aerial photographs on it. Not sure whether the air photos are GeoTIFFs.
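For reference, a minimal sketch of the kind of CRS check that produces the message above, assuming the terra package; the helper name check_geotiff() is hypothetical:

library(terra)
# Hypothetical helper: report whether a TIFF carries CRS information.
check_geotiff <- function(tif_path) {
  r <- try(terra::rast(tif_path), silent = TRUE)
  if (inherits(r, "try-error") || terra::crs(r) == "") {
    print("The TIFF file does not appear to be a GeoTIFF (no CRS information found).")
    return(NULL)
  }
  r
}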
[1] "The TIFF file does not appear to be a GeoTIFF (no CRS information found)."
NULL
The 1990s images in the folder //Aerial photos//1990s do not contain spatial reference information.
Need interactive map of study area with outlines of aerial photographs on it.
[1] "The TIFF file does not appear to be a GeoTIFF (no CRS information found)."
NULL
The 2010s images in the folder //Aerial photos//2010s do not contain spatial reference information.
Need interactive map of study area with outlines of aerial photographs on it.
Calibration folders are:

- TT-14232 : Corresponds to images in 2010s folder
- TT-14315 : Corresponds to images in 2010s folder
- WF-3162 : Corresponds to images in 1960s folder
- WF-3380 : Corresponds to images in 1960s folder

1960s: not complete. Images are present for WF-3392 and WF-3617, but there is no calibration information. 1990s: calibration information is not in the Calibration folder. 2010s: calibration information appears complete.
The following block draws the basemap with the outline of the Mauken herding district and the transect points overlaid. The transect points come from two different sources: “transect_points_mauken.shp” contains the points that Tim said were used; the other points are candidate points.
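Because that chunk is not echoed here, the following is only a sketch of what it presumably looks like; herdingDistrictMauken is assumed to be loaded already, possiblePoints is a hypothetical name for the second point layer, and the shapefile path is an assumption:

library(sf)
library(tmap)
tmap_mode("view")
# Points that Tim said were used (path relative to the shared drive is assumed)
transectPointsMauken <- st_read(paste0(sharedDrivePath, "transect_points_mauken.shp"))
maukenOnlyMap <- tm_shape(herdingDistrictMauken) +
  tm_polygons(col = "goldenrod", lwd = 1, fill_alpha = 0.4) +
  tm_shape(transectPointsMauken) +
  tm_dots(col = "black", fill = "blue", size = 0.7, shape = 21) +
  tm_shape(possiblePoints) +
  tm_dots(col = "black", fill = "red", size = 0.7, shape = 21) +
  tm_basemap(server = "OpenStreetMap") +
  tm_title("Mauken herding district with transect points")
maukenOnlyMap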
The folder called “Calibration” has 4 folders within it. Those folders are called:

- TT-14232
- TT-14315
- WF-3162
- WF-3380
The contents of each folder differ.
#### TT-14232 Folder Contents:
- AT
- FOTOGRAFERING
- GNSSINS
- KAMERAKAIBERING
#### TT-14315 Folder Contents:
- AT
- FOTOGRAFERING
- GNSSINS
- KAMERAKALIBERING
#### WF-3162 Folder Contents:
- Bingo
- GCP
- GCP_Validation_Report
- Inpho
- PATB
- Socset-Set
- Summit
- ZI
Files:
AT-rapport_WF-3162_Malangshalvøya_1968.pdf (this looks like a report on the orthorectification of the 1968 photos). Check with Tim about what it is.
#### WF-3380 Folder Contents:
- AT (contains similar folders/files to WF-3162, just organized differently)
- KAMERAKALIBERING
Create a map of the GCPs used in WF-3380.
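The chunk producing the console output below reads the used-GCP shapefile and maps it; a sketch of that step (the object name gcpWF3380 is hypothetical, the path is taken from the output):

gcpWF3380 <- st_read(paste0(
  "/Volumes/T7/Reindeer Commission/Aerial photos/Calibration documentation/",
  "WF-3380/AT/GCP/WF-3380_Balsfjord_1969_Used_GCP.shp"))
tmap_mode("view")
tm_shape(gcpWF3380) +
  tm_dots(col = "black", fill = "blue", size = 0.7, shape = 21) +
  tm_basemap(server = "OpenStreetMap") +
  tm_title("WF-3380 GCP")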
Reading layer `WF-3380_Balsfjord_1969_Used_GCP' from data source
`/Volumes/T7/Reindeer Commission/Aerial photos/Calibration documentation/WF-3380/AT/GCP/WF-3380_Balsfjord_1969_Used_GCP.shp'
using driver `ESRI Shapefile'
Simple feature collection with 260 features and 4 fields
Geometry type: POINT
Dimension: XY
Bounding box: xmin: 634776.6 ymin: 7661996 xmax: 694101.1 ymax: 7716781
Projected CRS: UTM_Zone_33_Northern_Hemisphere
ℹ tmap mode set to "view".
Create a map of the GCPs used in WF-3162.
SamletMetadata provides footprints of orthophotos. Maybe with feathering.
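If SamletMetadata is a polygon layer of footprints, drawing it over the basemap could look like this sketch (the path is a placeholder):

# Assumption: SamletMetadata is a polygon shapefile of orthophoto footprints.
footprints <- st_read(paste0(sharedDrivePath, "SamletMetadata.shp"))
tm_shape(footprints) +
  tm_polygons(fill_alpha = 0.2) +
  tm_basemap(server = "OpenStreetMap")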
# Code description written by Gemini AI ----
#' Reads a data file with an unknown number of header lines.
#'
#' This function is designed to read text files where:
#' 1. All initial header/comment lines start with '#'.
#'
#' 2. The line indicating the start of column names starts with '#' and specifically
#' contains the string "Output:".
#' 3. The actual column names are on subsequent lines, each starting with '#',
#' until the first line that does not start with '#'.
#' 4. The actual data starts after all header/comment lines.
#'
#' @param filepath A character string specifying the path to the file to read.
#' @param sep The field separator character. Defaults to "," (comma) for CSV-like files.
#' Set to " " for space-separated files, or "" for any whitespace.
#' @param ... Additional arguments to pass to `read.csv` (or `read.table` if `sep` is not ',').
#'
#' @return A data frame containing the data from the file with correct column names.
#' @export
#'
#' @examples
#' # Create a dummy file for demonstration with multi-line column names
#' file_content_multi_line_cols <- c(
#' "# This is a general comment line for metadata.",
#' "# Project: Dynamic Data Processing",
#' "# Report Date: 2024-08-20",
#' "# Output", # This line indicates column names follow
#' "# Record_ID",
#' "# Item_Name",
#' "# Quantity",
#' "# Price_USD",
#' "# Status_Code",
#' "101,Laptop,2,1200.00,A",
#' "102,Mouse,5,25.50,B",
#' "103,Keyboard,3,75.00,A",
#' "104,Monitor,1,300.00,C"
#' )
#' writeLines(file_content_multi_line_cols, "sample_data_multi_line_cols.txt")
#'
#' # Read the file with multi-line column names
#' my_data_multi <- read_file_with_dynamic_header("sample_data_multi_line_cols.txt", sep = ",")
#' print(my_data_multi)
#'
#' # Example with space-separated data and multi-line column names
#' file_content_space_multi_line_cols <- c(
#' "# Sensor Log Data",
#' "# Location: Lab A",
#' "# Output",
#' "# TimeStamp",
#' "# Temperature_C",
#' "# Humidity_Perc",
#' "2024-08-20T10:00:00 25.1 60.3",
#' "2024-08-20T10:01:00 25.3 60.5",
#' "2024-08-20T10:02:00 25.0 60.1"
#' )
#' writeLines(file_content_space_multi_line_cols, "sample_space_multi_line_cols.txt")
#'
#' my_data_space_multi <- read_file_with_dynamic_header("sample_space_multi_line_cols.txt", sep = "")
#' print(my_data_space_multi)
#'
#' # Clean up dummy files (uncomment to run)
#' # unlink("sample_data_multi_line_cols.txt")
#' # unlink("sample_space_multi_line_cols.txt")
#### ----
read_file_with_dynamic_header <- function(filepath, sep = ",", ...) {
# Read all lines of the file
lines <- readLines(filepath)
# Initialize variables to store indices
col_names_marker_index <- NULL
first_data_line_index <- NULL
# Step 1: Find the line indicating the start of column names (e.g., "# Output")
for (i in seq_along(lines)) {
if (startsWith(lines[i], "#") && grepl("Output:", lines[i], fixed = TRUE)) {
col_names_marker_index <- i
break
}
}
# Error handling if the column names marker line is not found
if (is.null(col_names_marker_index)) {
stop("Error: The line indicating column names (starting with '#' and containing 'Output:') was not found.")
}
# Step 2: Find the first line that does not start with '#' (start of actual data)
for (i in seq_along(lines)) {
if (!startsWith(lines[i], "#")) {
first_data_line_index <- i
break
}
}
# Error handling if no data lines are found
if (is.null(first_data_line_index)) {
stop("Error: No data lines found in the file after the header section.")
}
# Step 3: Extract column names from lines between col_names_marker_index
# and first_data_line_index (exclusive of both, as the marker line itself
# just says "Output" and the data line is the start of data)
# Ensure there are lines between the marker and the data
if (first_data_line_index <= col_names_marker_index) {
stop("Error: No column name lines found between 'Output' marker and data start.")
}
column_name_lines <- lines[(col_names_marker_index + 1):(first_data_line_index - 1)]
# Extract the text after '#' from each column name line
# and remove leading/trailing whitespace
column_names <- trimws(sub("^#\\s*", "", column_name_lines))
# Remove any column names that are ""
column_names <- column_names[!(column_names == "")]
# Error handling if no column names were extracted
if (length(column_names) == 0 || any(column_names == "")) {
stop("Error: No valid column names extracted. Ensure each column name line is not empty after '#' and matches the expected format.")
}
# Calculate the number of lines to skip for read.csv/read.table
# This skips all lines up to, but not including, the first data line.
lines_to_skip <- first_data_line_index - 1
# Read the data using read.csv or read.table based on separator
if (sep == ",") {
data <- read.csv(
filepath,
skip = lines_to_skip,
header = FALSE, # We are providing column names manually
col.names = column_names,
comment.char = "#", # Treat '#' as comment character within the data section if any appear
stringsAsFactors = FALSE, # Prevent automatic conversion to factors
... # Pass any additional arguments
)
} else {
data <- read.table(
filepath,
skip = lines_to_skip,
header = FALSE, # We are providing column names manually
col.names = column_names,
sep = sep,
fill=TRUE,
comment.char = "#", # Treat '#' as comment character within the data section if any appear
stringsAsFactors = FALSE, # Prevent automatic conversion to factors
... # Pass any additional arguments
)
}
return(data)
}
sharedDrivePath <- "//Volumes//T7//Reindeer Commission//"
specificPath <- paste0(sharedDrivePath, "Aerial photos//Calibration documentation//TT-14315//GNSSINS//")
allFolders <- list.files(specificPath, pattern=".txt", full.names=TRUE, recursive=TRUE)
#Read all text files into a list of objects
gcpList <- map(allFolders, read_file_with_dynamic_header, sep="")
gcpDF <- bind_rows(gcpList)
gcpSF <- st_as_sf(gcpDF,
coords = c("Longitude..Degrees.", "Latitude..Degrees."),
crs = 4326)
gcpSF.14315 <- gcpSF
gcpSF.minimal <- gcpSF[1:10,]
map.1 <- tm_shape(gcpSF) +
tm_dots(col="black", fill="blue", size=0.7, shape = 21) +
tm_basemap(server = "OpenStreetMap") +
tm_title("TT-14315 GCP")
gcpMap <- map.1
gcpMap

sharedDrivePath <- "//Volumes//T7//Reindeer Commission//"
specificPath <- paste0(sharedDrivePath, "Aerial photos//Calibration documentation//TT-14232//GNSSINS//")
allFolders <- list.files(specificPath, pattern=".txt", full.names=TRUE, recursive=TRUE)
#Read all text files into a list of objects
gcpList <- map(allFolders, read_file_with_dynamic_header, sep="")
gcpDF <- bind_rows(gcpList)
gcpSF <- st_as_sf(gcpDF,
coords = c("Longitude..Degrees.", "Latitude..Degrees."),
crs = 4326)
map.1 <- tm_shape(gcpSF) +
tm_dots(col="black", fill="blue", size=0.7, shape = 21) +
tm_shape(gcpSF.14315) +
tm_dots(col="black", fill="red", size=0.7, shape=21) +
tm_basemap(server = "OpenStreetMap") +
tm_title("TT-14232 and TT-14315 GNSS points combined")
gcpMap <- map.1
gcpMap

The two flights TT-14315 and TT-14232 have control points in an Excel file located in the AT folder within each top-level folder (named for the mission).
Reading layer `Transects_1915_gn' from data source
`/Volumes/T7/Reindeer Commission/Aerial photos/DMC_GIS/Transects_1915_gn.shp'
using driver `ESRI Shapefile'
Simple feature collection with 734 features and 15 fields
Geometry type: LINESTRING
Dimension: XYZ
Bounding box: xmin: 17.34645 ymin: 68.34459 xmax: 21.05244 ymax: 69.92768
z_range: zmin: 0 zmax: 0
Geodetic CRS: WGS 84
ℹ tmap mode set to "view".
── tmap v3 code detected ───────────────────────────────────────────────────────
[v3->v4] `tm_tm_lines()`: migrate the argument(s) related to the scale of the
visual variable `col` namely 'palette' (rename to 'values') to col.scale =
tm_scale(<HERE>).
[v3->v4] `tm_lines()`: use `col_alpha` instead of `alpha`.
Registered S3 method overwritten by 'jsonify':
method from
print.json jsonlite
sharedDrivePath <- "//Volumes//T7//Reindeer Commission//"
specificPath <- paste0(sharedDrivePath, "Aerial photos//Calibration documentation//TT-14315//AT")
fileName <- list.files(specificPath, pattern=".xlsx", full.names=TRUE, recursive=TRUE)
#browser()
fileName <- fileName[length(fileName)] # this is a hack to retrieve the real file name, not the lock file with a ~ in the fileName.
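# A more targeted alternative (sketch): exclude Excel lock files, whose names
# start with "~$", instead of relying on list order:
# fileName <- fileName[!grepl("^~\\$", basename(fileName))]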
sheets <- excel_sheets(fileName)
combinedData <- NULL
for(i in 1:length(sheets)){
#for(i in 2:2){
print(paste0("Processing Sheet #", i))
tempData <- read_excel_side_by_side(fileName, "Modell", "Mean", sheet=i, n_check_rows = 500)
tempData <- tempData %>%
mutate(BlockNum=sheets[i])
combinedData <- combinedData %>%
bind_rows(tempData)
}
[1] "Processing Sheet #1"
[1] "Processing Sheet #2"
Warning in read_excel_side_by_side(fileName, "Modell", "Mean", sheet = i, :
Column count mismatch for block starting at col 1. Inferred width: 13, Read
columns: 12. Columns might be misaligned or incomplete. Attempting best-effort
rename.
[1] "Processing Sheet #3"
[1] "Processing Sheet #4"
[1] "Processing Sheet #5"
# Make combinedData into a shapefile and map
# Set appropriate CRS based on information in reports. Filename=ATrapport_OmløpsfotoTroms2016_14315_Revidert.pdf
# The EPSG code for EUREF89 (ETRS89) / UTM zone 33N is EPSG:25833. This is a projected
# coordinate reference system designed for engineering, topographic, and large-scale
# mapping across Europe between 12°E and 18°E longitude, covering countries like
# Germany, Norway, and Denmark.
combinedData_sf14315 <- st_as_sf(combinedData, coords = c("malt_e", "malt_n"), crs = 25833)
# interactive_map <- leaflet(combinedData_sf) %>%
# addTiles() %>%
# # addPolygons(
# # fillColor = "blue",
# # fillOpacity = 0.7,
# # color = "white",
# # weight = 2,
# # opacity = 1,
# # highlightOptions = highlightOptions(
# # weight = 5,
# # color = "#666",
# # dashArray = "",
# # fillOpacity = 0.9,
# # bringToFront = TRUE
# # ),
# # label = ~name,
# # popup = ~paste0("<b>ID:</b> ", id, "<br>",
# # "<b>Name:</b> ", name, "<br>",
# # "<b>Value:</b> ", value)
# # ) %>%
# addMarkers(
# lng = 11, lat = 51,
# popup = "A marker in the center of Europe!"
# ) %>%
# addScaleBar(position = "bottomleft") %>%
# setView(lng = 11, lat = 51, zoom = 7)
#
# # Display the map
# interactive_map
[1] "Processing Sheet #1"
Warning in read_excel_side_by_side(fileName, "Modell", "Mean", sheet = i, :
Column count mismatch for block starting at col 1. Inferred width: 19, Read
columns: 18. Columns might be misaligned or incomplete. Attempting best-effort
rename.
Warning in read_excel_side_by_side(fileName, "Modell", "Mean", sheet = i, :
Column count mismatch for block starting at col 21. Inferred width: 15, Read
columns: 14. Columns might be misaligned or incomplete. Attempting best-effort
rename.
maukenOnlyMapGNSS <- tm_shape(herdingDistrictMauken) +
tm_polygons(
col = "goldenrod", # Fill color for district
lwd = 1, # Line width for district
fill_alpha = 0.4 # Transparency so you can see through it if needed
) +
tm_basemap(server = "OpenStreetMap") +
tm_title("Ground Control Points Used for WF Photos with Mauken Outline")
map.1 <- maukenOnlyMapGNSS +
tm_shape(combinedData_sf14315) +
tm_dots(col="black", fill="blue", size=0.7, shape = 21)
mapCombined <- map.1
mapCombined

maukenOnlyMapGNSS <- tm_shape(herdingDistrictMauken) +
tm_polygons(
col = "goldenrod", # Fill color for district
lwd = 1, # Line width for district
fill_alpha = 0.4 # Transparency so you can see through it if needed
) +
tm_basemap(server = "OpenStreetMap") +
tm_title("Ground Control Points Used for WF Photos with Mauken Outline")
map.1 <- maukenOnlyMapGNSS +
tm_shape(combinedData_sf14232) +
tm_dots(col="black", fill="blue", size=0.7, shape = 21)
mapCombined <- map.1
mapCombined

interactive_map <- leaflet(combinedData_sf14232) %>%
addTiles() %>%
# addPolygons(
# fillColor = "blue",
# fillOpacity = 0.7,
# color = "white",
# weight = 2,
# opacity = 1,
# highlightOptions = highlightOptions(
# weight = 5,
# color = "#666",
# dashArray = "",
# fillOpacity = 0.9,
# bringToFront = TRUE
# ),
# label = ~name,
# popup = ~paste0("<b>ID:</b> ", id, "<br>",
# "<b>Name:</b> ", name, "<br>",
# "<b>Value:</b> ", value)
# ) %>%
addMarkers(
lng = 11, lat = 51,
popup = "A marker in the center of Europe!"
) %>%
addScaleBar(position = "bottomleft") %>%
setView(lng = 11, lat = 51, zoom = 7)
# Display the map
interactive_map

Identify transect locations based on the points already determined. Transects should be belt transects that run from valley bottom to ridgetop (to start with), possibly 100 m wide and divided into 50 m vertical segments. Check different transect widths and different vertical division lengths. Use the sigmoid wave method to identify the position of the treeline in different years and so find the magnitude of change (a sketch follows below). Check Baider.

Find footprints of 2010s orthophotos for the Mauken herding district.
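A minimal sketch of the sigmoid idea, fitting a logistic curve of canopy cover against elevation per transect; all names and values here are hypothetical:

# Take the inflection point e0 of a fitted logistic curve as the treeline elevation.
treeline_from_sigmoid <- function(df) {
  fit <- nls(cover ~ 1 / (1 + exp(k * (elev - e0))),
             data = df,
             start = list(k = 0.05, e0 = median(df$elev)))
  coef(fit)[["e0"]]
}
# Synthetic example with a "true" treeline at 450 m:
set.seed(1)
elev <- seq(100, 800, by = 25)
cover <- 1 / (1 + exp(0.03 * (elev - 450))) + rnorm(length(elev), sd = 0.03)
treeline_from_sigmoid(data.frame(elev = elev, cover = cover))

Comparing e0 between decades would then give the magnitude of treeline shift per transect.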
Identify GCP locations on the orthophoto to match with the 1960s data from Mauken.
Find footprints for the 1960s imagery.
Use rayshader to overlay the air photos (2010s) on the DTM.
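A hedged sketch of the rayshader overlay, assuming the orthophoto has already been resampled to the DTM grid; the file paths are placeholders:

library(rayshader)
library(terra)
dtm <- terra::rast("dtm_mauken.tif")            # placeholder path
elmat <- raster_to_matrix(dtm)
ortho <- terra::rast("ortho_2010s_mauken.tif")  # placeholder; must match the DTM grid
ortho_rgb <- terra::as.array(ortho) / 255       # RGB values scaled to [0, 1]
elmat %>%
  height_shade() %>%
  add_overlay(ortho_rgb, alphalayer = 0.8) %>%
  plot_map()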