
Conversation


@Antoplt Antoplt commented Nov 18, 2025

Describe your changes

A mountpoint field was added to the disk data structure in metric.go so that disk metrics include the disk's mountpoint. The documentation file openapi.yml was updated to add mountpoint to the DiskData properties, and an integration test file disk_test.go was added in test/integration to ensure that the change to the disk metrics is effective.

Write your issue number after "Fixes "

Fixes #139

Please ensure all items are checked off before requesting a review. "Checked off" means you need to add an "x" character between brackets so they turn into checkmarks.

  • (Do not skip this or your PR will be closed) I deployed the application locally.
  • (Do not skip this or your PR will be closed) I have performed a self-review and testing of my code.
  • I have included the issue # in the PR.
  • I have not included any files that are not related to my pull request
  • I didn't use any hardcoded values (otherwise it will not scale, and will make it difficult to maintain consistency across the application).
  • My PR is granular and targeted to one specific feature.


coderabbitai bot commented Nov 18, 2025

Walkthrough

Added an exported Mountpoint string to DiskData, defaulting to "unknown" and populated from partition.Mountpoint during collection; updated OpenAPI DiskData schema and added an integration test validating mountpoint presence.

Changes

  • Disk metric struct & collection (internal/metric/disk.go, internal/metric/metric.go): Added exported Mountpoint string to DiskData; set Mountpoint: partition.Mountpoint during partition metric collection; the default disk entry now has Mountpoint: "unknown".
  • API schema (openapi.yml): Added a mountpoint property (string) to components.schemas.DiskData with an example value.
  • Integration test (test/integration/disk_test.go): New test TestDiskMetricsContent that collects disk metrics, logs partial errors, skips when only default/unknown metrics exist, and asserts at least one DiskData has a non-empty, non-"unknown" Mountpoint.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

  • Review attention:
    • Verify JSON tag and naming consistency for Mountpoint.
    • Ensure OpenAPI schema matches struct tags and example.
    • Confirm the integration test's handling of "unknown" is appropriate for varied CI environments.

Poem

🐇 I found a path where disks abide,
A mountpoint stitched in data's tide.
From partition soil it hopped anew,
Now each disk wears a label true.
🥕 Hooray — a tiny change, bright and spry!

Pre-merge checks and finishing touches

✅ Passed checks (5 passed)
  • Title check ✅ Passed: The title 'Feat/add mountpoint field disk data' clearly and specifically describes the main change: adding a mountpoint field to disk data structures.
  • Description check ✅ Passed: The PR description is well-detailed, explaining the mountpoint field addition to disk data, the openapi.yml update, and the new integration test, all directly related to the changeset.
  • Linked Issues check ✅ Passed: All code changes directly implement issue #139 requirements: added Mountpoint field to DiskData struct, updated OpenAPI schema, and created tests to verify the implementation exposes disk mountpoints for Checkmate.
  • Out of Scope Changes check ✅ Passed: All changes are scoped to the feature requirement: only the DiskData struct, disk.go metrics collection, openapi.yml documentation, and disk integration tests were modified, with no unrelated files altered.
  • Docstring Coverage ✅ Passed: Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fc89555 and 018d717.

📒 Files selected for processing (1)
  • test/integration/disk_test.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • test/integration/disk_test.go





@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
test/integration/disk_test.go (1)

11-52: Test implementation is solid; consider stricter validation.

The integration test correctly validates the presence of the Mountpoint field. However, it treats the default value "unknown" as a valid mountpoint. If all disk collections fail and only the default entry is returned, the test will pass without a warning.

Consider this optional enhancement to distinguish real mountpoints from the default:

 		// Validation: The Mountpoint field must not be empty
 		if diskData.Mountpoint == "" {
 			t.Errorf("Disk device %s has an empty Mountpoint field", diskData.Device)
+		} else if diskData.Mountpoint == "unknown" {
+			t.Logf("Disk device %s has default 'unknown' mountpoint (metric collection may have failed)", diskData.Device)
 		} else {
 			foundValidDisk = true
 		}
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between abce462 and d6bb4e6.

📒 Files selected for processing (4)
  • internal/metric/disk.go (2 hunks)
  • internal/metric/metric.go (1 hunks)
  • openapi.yml (1 hunks)
  • test/integration/disk_test.go (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
test/integration/disk_test.go (2)
internal/metric/disk.go (1)
  • CollectDiskMetrics (217-283)
internal/metric/metric.go (1)
  • DiskData (66-81)
🔇 Additional comments (4)
internal/metric/metric.go (1)

68-68: LGTM! Field addition is correctly implemented.

The Mountpoint field is properly added to the DiskData struct with appropriate JSON tagging and inline documentation.

openapi.yml (1)

294-296: LGTM! OpenAPI schema correctly reflects the new field.

The mountpoint property is properly defined with an appropriate type and example value that aligns with the Go struct definition.

internal/metric/disk.go (2)

105-105: LGTM! Mountpoint correctly populated from partition data.

The field is properly initialized from partition.Mountpoint, consistent with how other partition attributes are collected.


221-221: LGTM! Appropriate default value for error cases.

Setting Mountpoint to "unknown" in the default disk data is consistent with the Device field and provides clear indication when metrics collection fails.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
test/integration/disk_test.go (1)

22-25: Consider making the test more assertive for new feature validation.

The test silently skips validation when no metrics are found. While this might be appropriate for some restricted environments, it could mask issues with the new mountpoint feature. Since this test validates a newly added field, consider at least one of these approaches:

  1. Fail the test if no metrics are collected on systems where disk access should be available
  2. Use environment variables or build tags to distinguish between restricted and normal environments
  3. Document clearly that this test requires disk access and may skip in restricted environments

Example approach:

 	// Verify that we retrieved at least some metrics
 	if len(metricsSlice) == 0 {
-		t.Log("No disk metrics found. Skipping validation.")
-		return
+		t.Skip("No disk metrics found. This may indicate restricted disk access or a collection failure.")
 	}

Using t.Skip() would make it clearer that the test didn't run, versus passing with no validation.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d6bb4e6 and 9b07a63.

📒 Files selected for processing (1)
  • test/integration/disk_test.go (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
test/integration/disk_test.go (2)
internal/metric/disk.go (1)
  • CollectDiskMetrics (217-283)
internal/metric/metric.go (1)
  • DiskData (66-81)
🔇 Additional comments (3)
test/integration/disk_test.go (3)

1-7: LGTM!

Clean package declaration and imports. The test correctly uses the internal metric package to validate the new Mountpoint field functionality.


27-38: LGTM!

Good defensive type checking with the assertion at lines 31-35, and helpful debug logging at line 38. The foundValidDisk flag properly tracks whether at least one valid disk with a real mountpoint was discovered.


40-48: LGTM!

The mountpoint validation logic correctly handles three cases:

  1. Empty mountpoint → test error (appropriate)
  2. "unknown" mountpoint → log only (appropriate, as it's the default fallback)
  3. Valid mountpoint → mark as valid disk

This aligns well with the implementation where "unknown" is the default value when metric collection fails.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
test/integration/disk_test.go (1)

21-52: Clarify behavior when only the default “unknown” disk is returned

CollectDiskMetrics can return the defaultDiskData entry with Device="unknown" and Mountpoint="unknown" when it can’t gather real disk metrics. In that case len(metricsSlice) > 0, so the t.Skip on Line 23 won’t trigger, and the test will fail via the !foundValidDisk check even though the environment simply has no usable disk metrics.

If you want this integration test to fail only on real regressions (and not on environments that can’t expose disks at all), consider explicitly treating the sentinel as “no real disks” and skipping in that case. For example:

 func TestDiskMetricsContent(t *testing.T) {
 	// Attempt to collect disk metrics using the internal metric package
 	metricsSlice, errs := metric.CollectDiskMetrics()
@@
-	// Verify that we retrieved at least some metrics
-	if len(metricsSlice) == 0 {
-		t.Skip("No disk metrics found. This may indicate restricted disk access or a collection failure.")
-	}
+	// If there are truly no metrics, or only the default "unknown" entry, skip in this environment.
+	if len(metricsSlice) == 0 {
+		t.Skip("No disk metrics found. This may indicate restricted disk access or a collection failure.")
+	}
+
+	if len(metricsSlice) == 1 {
+		if dd, ok := metricsSlice[0].(*metric.DiskData); ok &&
+			dd.Device == "unknown" && dd.Mountpoint == "unknown" {
+			t.Skip("Only default 'unknown' disk metric returned; disk metrics not available in this environment.")
+		}
+	}
@@
 	// If we iterated through metrics but didn't find any valid disk with a mountpoint, log a warning.
 	if !foundValidDisk {
 		t.Error("No valid disks with mountpoints were found. The mountpoint feature may not be working correctly.")
 	}

This keeps the test strict when real disks are present but missing mountpoints, while avoiding spurious failures where the environment cannot provide disk metrics at all.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9b07a63 and fc89555.

📒 Files selected for processing (1)
  • test/integration/disk_test.go (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
test/integration/disk_test.go (2)
internal/metric/disk.go (1)
  • CollectDiskMetrics (217-283)
internal/metric/metric.go (1)
  • DiskData (66-81)
🔇 Additional comments (1)
test/integration/disk_test.go (1)

9-47: End-to-end mountpoint assertions look solid

The test exercises metric.CollectDiskMetrics end-to-end, asserts the slice contains *metric.DiskData, and validates that Mountpoint is neither empty nor "unknown" for at least one disk. This gives good coverage of the new field and will catch regressions in how mountpoints are populated.



Development

Successfully merging this pull request may close these issues.

[FEATURE]: Access the mountpoint of a disk within Checkmate
