Performance/file parse and mount #6975
Conversation
- Introduced BatchAggregator to handle IPC events in batches, reducing Redux dispatch overhead during collection mounting.
- Updated collection watcher to utilize batch processing for adding files and directories, improving UI performance.
- Implemented ParsedFileCacheStore using LMDB for efficient caching of parsed file content, enhancing loading speed and reducing redundant parsing.
- Adjusted collection slice to support batch addition of items, minimizing re-renders and improving state management.
- Updated relevant components to reflect changes in loading states and collection data handling.
- Introduced a new Cache component in the Preferences section to display cache statistics and allow users to purge the cache.
- Implemented IPC handlers for fetching cache stats and purging the cache in the Electron main process.
- Added styled components for better UI presentation of cache information.
- Updated Preferences component to include a new tab for cache management.
Walkthrough

Introduces LMDB-backed file caching and IPC event batching to optimize collection file handling. Adds cache management UI in Preferences with stats and purge functionality. Refactors the collection watcher to be asynchronous, batch file system events, and cache parsed .bru files. Updates collection loading indicators to use the native isLoading property.
Sequence Diagram

```mermaid
sequenceDiagram
    participant FW as File Watcher
    participant BA as Batch Aggregator
    participant Main as Electron Main
    participant IPC as IPC Channel
    participant Renderer as Renderer
    participant Redux as Redux Store
    FW->>BA: addFile/addDir/change events
    BA->>BA: queue events (time: 200ms, size: 300)
    BA->>BA: flush triggered (timeout/manual)
    BA->>Main: webContents.send batch
    Main->>IPC: main:collection-tree-batch-updated
    IPC->>Renderer: dispatch event with batch payload
    Renderer->>Redux: collectionBatchAddItems(items)
    Redux->>Redux: group by collectionUid<br/>process bulk items single pass<br/>update folder hierarchies
    Redux-->>Renderer: state updated
```
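To make the flow above concrete, here is a minimal sketch of the renderer side of the batch channel. The channel and action names come from the diagram; the `ipcRenderer`/`store` wiring is an illustrative assumption, not Bruno's actual preload setup.

```js
// Sketch only: assumes `ipcRenderer`, `store`, and the
// `collectionBatchAddItems` action creator are in scope (illustrative wiring).
ipcRenderer.on('main:collection-tree-batch-updated', (event, batch) => {
  // One Redux dispatch per batch instead of one per file event,
  // so the sidebar re-renders once per flush rather than per file.
  store.dispatch(collectionBatchAddItems(batch));
});
```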
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 inconclusive)
…/file-parse-and-mount
…veral Babel dependencies
- Increased DISPATCH_INTERVAL_MS from 150ms to 200ms for better timing control.
- Adjusted MAX_BATCH_SIZE from 200 to 300 items to enhance batch processing efficiency.
Actionable comments posted: 6
🤖 Fix all issues with AI agents
In `@packages/bruno-app/src/components/Sidebar/Collections/Collection/index.js`:
- Line 64: In the createCollection reducer, initialize the missing boolean flag
by setting collection.isLoading = false alongside the other property
initializations (e.g., collection.id, collection.name, etc.); this ensures
middleware checks that use !collection.isLoading behave correctly and the
Collection component spinner can render.
In `@packages/bruno-app/src/providers/App/useIpcEvents.js`:
- Around line 131-137: The batch path dispatch for unlink inside
individualItems.forEach is firing immediately; update the handler for eventType
'unlink' to delay calling dispatch(collectionUnlinkFileEvent({ file: payload }))
by 100ms to match the single-event handler behavior — e.g., wrap the dispatch in
a 100ms setTimeout or use an await sleep(100) if the enclosing function can be
async; adjust individualItems.forEach (eventType/payload handling) accordingly
so unlink events use the same 100ms delay while keeping change and unlinkDir
unchanged.
In `@packages/bruno-app/src/providers/ReduxStore/slices/collections/index.js`:
- Around line 2709-2819: The batch reducer must preserve the transient
folder/file flags that single-item reducers set via state.tempDirectories: when
creating or updating folder objects in the directories loop (referencing
variables directories, childItem, dir.meta.uid) and when creating/updating file
objects in the files loop (referencing files, currentSubItems, file.data.uid and
file.meta.uid), check state.tempDirectories for the corresponding uid
(dir.meta.uid or file.data.uid) and apply the same transient marker/behavior
used by the single-item reducers (e.g., set the same transient field or metadata
on childItem or new file object and clean up state.tempDirectories if the
single-item logic removes it). Ensure both code paths (new creation and
existing-item update) mirror the single-item reducer’s handling of
state.tempDirectories so transient requests are preserved.
In `@packages/bruno-electron/package.json`:
- Around line 75-76: The electron build config in package.json needs to unpack
LMDB native binaries so they aren't loaded from inside the asar; update the
electron-builder config (the JSON object that contains "asar") to either set
"asar": false or add an "asarUnpack" entry that matches LMDB packages (e.g.,
include patterns for "node_modules/lmdb/**" and "node_modules/@lmdb/**"); modify
the package.json electron-builder section accordingly so the LMDB pre-built
binaries are extracted at build time.
In `@packages/bruno-electron/src/app/collection-watcher.js`:
- Around line 341-342: The batch aggregator is being shared across collections
because getAggregator is called without the collectionUid; update all call sites
that create the aggregator (e.g., where batchAggregator is assigned) to pass the
collectionUid into getAggregator so the key generation via
getAggregatorKey(collectionUid, ...) isolates queues per collection; search for
invocations of getAggregator (and places using batchAggregator) and add the
collectionUid argument to each to prevent cross-collection batching and
incorrect flush timing.
In `@packages/bruno-electron/src/store/parsed-file-cache.js`:
- Around line 219-236: The invalidateDirectory implementation builds a prefix
that can collide with sibling directory names; change how prefix is constructed
in invalidateDirectory so it includes a trailing separator after dirPath (e.g.,
append the same separator used between collectionPath and dirPath) and use that
normalized prefix for both the getRange start and the startsWith check to ensure
only children of the exact directory are matched (update variable prefix and any
comparisons in invalidateDirectory accordingly).
🧹 Nitpick comments (2)
packages/bruno-app/src/components/Preferences/Cache/StyledWrapper.js (1)
3-65: Add a brief JSDoc for StyledWrapper. Helps document the styling scope and intent for the Cache preferences panel.
As per coding guidelines: Add JSDoc comments to abstractions for additional details.

♻️ Suggested change

```diff
+/**
+ * Styled wrapper for the Preferences > Cache panel.
+ * Defines themed typography, stats layout, and purge button styles.
+ */
 const StyledWrapper = styled.div`
```

packages/bruno-app/src/components/Preferences/Cache/index.js (1)

5-87: Add a brief JSDoc for Cache. This is a new exported UI abstraction; a short doc helps future maintainers.
As per coding guidelines: Add JSDoc comments to abstractions for additional details.

♻️ Suggested change

```diff
+/**
+ * Preferences > Cache panel. Fetches cache stats and exposes a purge action.
+ */
 const Cache = () => {
```
```diff
  const [isKeyboardFocused, setIsKeyboardFocused] = useState(false);
  const dispatch = useDispatch();
- const isLoading = areItemsLoading(collection);
+ const isLoading = collection.isLoading;
```
🧩 Analysis chain
🏁 Script executed:

```shell
# Search for isLoading usage in collections-related files
rg -n "isLoading" packages/bruno-app/src --type js --type jsx --type ts --type tsx -C 2
```

Repository: usebruno/bruno
Length of output: 85

🏁 Script executed:

```shell
# Look specifically at collection reducers and actions
fd -e js -e jsx -e ts -e tsx | xargs grep -l "collection" | grep -i "reducer\|action" | head -20
```

Repository: usebruno/bruno
Length of output: 961

🏁 Script executed:

```shell
# Search for where collection objects are created/initialized
rg -n "collection\s*=" packages/bruno-app/src -A 3 -B 3 --type js --type jsx --type ts --type tsx | head -100
```

Repository: usebruno/bruno
Length of output: 85

🏁 Script executed:

```shell
# Search for isLoading without problematic file type flags
rg -n "isLoading" packages/bruno-app/src -C 2 | head -150
```

Repository: usebruno/bruno
Length of output: 14552

🏁 Script executed:

```shell
# Examine the collections reducer
cat packages/bruno-app/src/providers/ReduxStore/slices/collections/reducer.js
```

Repository: usebruno/bruno
Length of output: 163

🏁 Script executed:

```shell
# Examine the collections actions
cat packages/bruno-app/src/providers/ReduxStore/slices/collections/actions.js
```

Repository: usebruno/bruno
Length of output: 50370

🏁 Script executed:

```shell
# Find the collections reducer/slice file
fd -name "*collections*" -path "*ReduxStore/slices*" -type f
```

Repository: usebruno/bruno
Length of output: 288

🏁 Script executed:

```shell
# Search for initial state of collections
rg -n "initialState\|isLoading.*false\|isLoading.*true" packages/bruno-app/src/providers/ReduxStore/slices/collections/ -C 2
```

Repository: usebruno/bruno
Length of output: 40

🏁 Script executed:

```shell
# Look for collection object creation with new/spread syntax
rg -n "collections\s*:\s*\{|collection\s*:\s*\{|\.\.\.\s*collection" packages/bruno-app/src/providers/ReduxStore/slices/collections/index.js -C 3 | head -80
```

Repository: usebruno/bruno
Length of output: 40

🏁 Script executed:

```shell
# Find collections-related reducer files
find packages/bruno-app/src/providers/ReduxStore/slices -name "*collection*" -type f
```

Repository: usebruno/bruno
Length of output: 40

🏁 Script executed:

```shell
# Look for the collections slice/reducer file and read its beginning to find initialState
cat packages/bruno-app/src/providers/ReduxStore/slices/collections/index.js | head -200
```

Repository: usebruno/bruno
Length of output: 6449

🏁 Script executed:

```shell
# Search for where collection objects are created and if isLoading is initialized
rg "isLoading\s*[:=]" packages/bruno-app/src/providers/ReduxStore/slices/collections/
```

Repository: usebruno/bruno
Length of output: 186

🏁 Script executed:

```shell
# Search for where collections are loaded/created from file system or IPC
rg "createCollection|ipcRenderer.invoke.*load.*collection" packages/bruno-app/src -C 3 | head -150
```

Repository: usebruno/bruno
Length of output: 14655

🏁 Script executed:

```shell
# Look for initialization of collections when they're loaded
rg "isLoading\s*=" packages/bruno-app/src -B 2 -A 2 | grep -i collection | head -80
```

Repository: usebruno/bruno
Length of output: 2552

🏁 Script executed:

```shell
# Check if collection objects from IPC/preload have isLoading
rg "collection\s*=" packages/bruno-app/src/providers/ReduxStore -C 2 | head -100
```

Repository: usebruno/bruno
Length of output: 11017

🏁 Script executed:

```shell
# Verify the Collection component and how isLoading is used with undefined
cat packages/bruno-app/src/components/Sidebar/Collections/Collection/index.js | grep -A 5 -B 5 "isLoading"
```

Repository: usebruno/bruno
Length of output: 1088
Initialize collection.isLoading to false in the createCollection reducer.
The field is never initialized when collections are created, leaving it undefined. This causes middleware checks (!collection.isLoading) to evaluate incorrectly and may prevent the spinner from rendering in the Collection component. Add collection.isLoading = false; in the createCollection reducer alongside the other property initializations.
🤖 Prompt for AI Agents
In `@packages/bruno-app/src/components/Sidebar/Collections/Collection/index.js` at
line 64, In the createCollection reducer, initialize the missing boolean flag by
setting collection.isLoading = false alongside the other property
initializations (e.g., collection.id, collection.name, etc.); this ensures
middleware checks that use !collection.isLoading behave correctly and the
Collection component spinner can render.
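For illustration, a minimal sketch of what the suggested initialization could look like inside the reducer; the surrounding structure is a hypothetical placeholder, not Bruno's actual code.

```js
// Hypothetical sketch of the suggested fix; sibling initializations are elided.
createCollection: (state, action) => {
  const collection = action.payload;
  // ...existing property initializations (name, pathname, etc.)...
  collection.isLoading = false; // explicit boolean so `!collection.isLoading` checks are reliable
  state.collections.push(collection);
}
```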
```js
// Process remaining items individually (these are typically rare during mount)
individualItems.forEach(({ eventType, payload }) => {
  if (eventType === 'change') {
    dispatch(collectionChangeFileEvent({ file: payload }));
  } else if (eventType === 'unlink') {
    dispatch(collectionUnlinkFileEvent({ file: payload }));
  } else if (eventType === 'unlinkDir') {
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's check the file and see the full context around lines 131-137
wc -l packages/bruno-app/src/providers/App/useIpcEvents.js
```

Repository: usebruno/bruno
Length of output: 114

🏁 Script executed:

```shell
# Read the file to understand the full context
cat -n packages/bruno-app/src/providers/App/useIpcEvents.js | sed -n '100,160p'
```

Repository: usebruno/bruno
Length of output: 2783

🏁 Script executed:

```shell
# Look for the single-event unlink handler to find the 100ms delay mentioned
rg -n "unlink" packages/bruno-app/src/providers/App/useIpcEvents.js -B2 -A2
```

Repository: usebruno/bruno
Length of output: 1470

🏁 Script executed:

```shell
# Look for _collectionTreeBatchUpdated to understand the batch source
rg -n "_collectionTreeBatchUpdated\|collectionTreeBatchUpdated" packages/bruno-app/src/providers/App/useIpcEvents.js -B3 -A3
```

Repository: usebruno/bruno
Length of output: 40

🏁 Script executed:

```shell
# Search for setTimeout patterns related to unlink in this file
rg -n "setTimeout.*unlink\|unlink.*setTimeout" packages/bruno-app/src/providers/App/useIpcEvents.js -B2 -A2
```

Repository: usebruno/bruno
Length of output: 40

🏁 Script executed:

```shell
# Look for context around the single-event unlink handler to understand why the delay exists
cat -n packages/bruno-app/src/providers/App/useIpcEvents.js | sed -n '60,100p'
```

Repository: usebruno/bruno
Length of output: 1274

🏁 Script executed:

```shell
# Check if there's any documentation about batch handling or the delay reason
rg -n "rename\|race\|timing\|delay" packages/bruno-app/src/providers/App/useIpcEvents.js -i
```

Repository: usebruno/bruno
Length of output: 40

🏁 Script executed:

```shell
# Check if there are any comments explaining batch vs single event differences
cat -n packages/bruno-app/src/providers/App/useIpcEvents.js | sed -n '50,150p'
```

Repository: usebruno/bruno
Length of output: 3964
Add 100ms delay to batch unlink dispatch to match single-event behavior.
The single-event handler delays unlink by 100ms (lines 79-86), but the batch handler dispatches it immediately (lines 135-136). Since batches can include unlink events (line 116), this creates inconsistent timing that could reintroduce race conditions. Apply the same delay in the batch path to maintain consistency.
🔧 Suggested fix

```diff
  } else if (eventType === 'unlink') {
+   setTimeout(() => {
      dispatch(collectionUnlinkFileEvent({ file: payload }));
+   }, 100);
  } else if (eventType === 'unlinkDir') {
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
// Process remaining items individually (these are typically rare during mount)
individualItems.forEach(({ eventType, payload }) => {
  if (eventType === 'change') {
    dispatch(collectionChangeFileEvent({ file: payload }));
  } else if (eventType === 'unlink') {
    setTimeout(() => {
      dispatch(collectionUnlinkFileEvent({ file: payload }));
    }, 100);
  } else if (eventType === 'unlinkDir') {
```
🤖 Prompt for AI Agents
In `@packages/bruno-app/src/providers/App/useIpcEvents.js` around lines 131 - 137,
The batch path dispatch for unlink inside individualItems.forEach is firing
immediately; update the handler for eventType 'unlink' to delay calling
dispatch(collectionUnlinkFileEvent({ file: payload })) by 100ms to match the
single-event handler behavior — e.g., wrap the dispatch in a 100ms setTimeout or
use an await sleep(100) if the enclosing function can be async; adjust
individualItems.forEach (eventType/payload handling) accordingly so unlink
events use the same 100ms delay while keeping change and unlinkDir unchanged.
```js
// Process directories first to ensure folder structure exists
const directories = collectionItems.filter((i) => i.eventType === 'addDir');
const files = collectionItems.filter((i) => i.eventType === 'addFile');

// Add directories
for (const { payload: dir } of directories) {
  const subDirectories = getSubdirectoriesFromRoot(collection.pathname, dir.meta.pathname);
  let currentPath = collection.pathname;
  let currentSubItems = collection.items;
  for (const directoryName of subDirectories) {
    let childItem = currentSubItems.find((f) => f.type === 'folder' && f.filename === directoryName);
    currentPath = path.join(currentPath, directoryName);
    if (!childItem) {
      childItem = {
        uid: dir?.meta?.uid || uuid(),
        pathname: currentPath,
        name: dir?.meta?.name || directoryName,
        seq: dir?.meta?.seq,
        filename: directoryName,
        collapsed: true,
        type: 'folder',
        items: []
      };
      currentSubItems.push(childItem);
    }
    currentSubItems = childItem.items;
  }
}

// Add files
for (const { payload: file } of files) {
  const isCollectionRoot = file.meta.collectionRoot ? true : false;
  const isFolderRoot = file.meta.folderRoot ? true : false;

  if (isCollectionRoot) {
    collection.root = file.data;
    continue;
  }

  if (isFolderRoot) {
    const folderPath = path.dirname(file.meta.pathname);
    const folderItem = findItemInCollectionByPathname(collection, folderPath);
    if (folderItem) {
      if (file?.data?.meta?.name) {
        folderItem.name = file?.data?.meta?.name;
      }
      folderItem.root = file.data;
      if (file?.data?.meta?.seq) {
        folderItem.seq = file.data?.meta?.seq;
      }
    }
    continue;
  }

  const dirname = path.dirname(file.meta.pathname);
  const subDirectories = getSubdirectoriesFromRoot(collection.pathname, dirname);
  let currentPath = collection.pathname;
  let currentSubItems = collection.items;
  for (const directoryName of subDirectories) {
    let childItem = currentSubItems.find((f) => f.type === 'folder' && f.filename === directoryName);
    currentPath = path.join(currentPath, directoryName);
    if (!childItem) {
      childItem = {
        uid: uuid(),
        pathname: currentPath,
        name: directoryName,
        collapsed: true,
        type: 'folder',
        items: []
      };
      currentSubItems.push(childItem);
    }
    currentSubItems = childItem.items;
  }

  if (file.meta.name !== 'folder.bru' && !currentSubItems.find((f) => f.name === file.meta.name)) {
    const currentItem = find(currentSubItems, (i) => i.uid === file.data.uid);
    if (currentItem) {
      currentItem.name = file.data.name;
      currentItem.type = file.data.type;
      currentItem.seq = file.data.seq;
      currentItem.tags = file.data.tags;
      currentItem.request = file.data.request;
      currentItem.filename = file.meta.name;
      currentItem.pathname = file.meta.pathname;
      currentItem.settings = file.data.settings;
      currentItem.examples = file.data.examples;
      currentItem.draft = null;
      currentItem.partial = file.partial;
      currentItem.loading = file.loading;
      currentItem.size = file.size;
      currentItem.error = file.error;
    } else {
      currentSubItems.push({
        uid: file.data.uid,
        name: file.data.name,
        type: file.data.type,
        seq: file.data.seq,
        tags: file.data.tags,
        request: file.data.request,
        settings: file.data.settings,
        examples: file.data.examples,
        filename: file.meta.name,
        pathname: file.meta.pathname,
        draft: null,
        partial: file.partial,
        loading: file.loading,
        size: file.size,
        error: file.error
      });
    }
```
Batch reducer drops transient flags from addDir/addFile events.
The single-item reducers mark transient folders/files using state.tempDirectories. The batch reducer omits this, so transient requests can be treated as normal items and folders won’t inherit transient state.
🐛 Suggested fix to preserve transient behavior

```diff
-for (const [collectionUid, collectionItems] of itemsByCollection) {
+for (const [collectionUid, collectionItems] of itemsByCollection) {
   const collection = findCollectionByUid(state.collections, collectionUid);
   if (!collection) continue;
+  const tempDirectory = state.tempDirectories?.[collectionUid];
   // Process directories first to ensure folder structure exists
   const directories = collectionItems.filter((i) => i.eventType === 'addDir');
   const files = collectionItems.filter((i) => i.eventType === 'addFile');
   // Add directories
   for (const { payload: dir } of directories) {
+    const isTransientDir = tempDirectory && dir.meta.pathname.startsWith(tempDirectory);
     const subDirectories = getSubdirectoriesFromRoot(collection.pathname, dir.meta.pathname);
     let currentPath = collection.pathname;
     let currentSubItems = collection.items;
     for (const directoryName of subDirectories) {
       let childItem = currentSubItems.find((f) => f.type === 'folder' && f.filename === directoryName);
       currentPath = path.join(currentPath, directoryName);
       if (!childItem) {
         childItem = {
           uid: dir?.meta?.uid || uuid(),
           pathname: currentPath,
           name: dir?.meta?.name || directoryName,
           seq: dir?.meta?.seq,
           filename: directoryName,
           collapsed: true,
           type: 'folder',
-          items: []
+          isTransient: isTransientDir,
+          items: []
         };
         currentSubItems.push(childItem);
+      } else if (isTransientDir && !childItem.isTransient) {
+        childItem.isTransient = true;
       }
       currentSubItems = childItem.items;
     }
   }
   // Add files
   for (const { payload: file } of files) {
+    const isTransientFile = tempDirectory && file.meta.pathname.startsWith(tempDirectory);
     const isCollectionRoot = file.meta.collectionRoot ? true : false;
     const isFolderRoot = file.meta.folderRoot ? true : false;
     if (isCollectionRoot) {
       collection.root = file.data;
       continue;
     }
@@
     const subDirectories = getSubdirectoriesFromRoot(collection.pathname, dirname);
     let currentPath = collection.pathname;
     let currentSubItems = collection.items;
     for (const directoryName of subDirectories) {
       let childItem = currentSubItems.find((f) => f.type === 'folder' && f.filename === directoryName);
       currentPath = path.join(currentPath, directoryName);
       if (!childItem) {
         childItem = {
           uid: uuid(),
           pathname: currentPath,
           name: directoryName,
           collapsed: true,
           type: 'folder',
-          items: []
+          isTransient: isTransientFile,
+          items: []
         };
         currentSubItems.push(childItem);
+      } else if (isTransientFile && !childItem.isTransient) {
+        childItem.isTransient = true;
       }
       currentSubItems = childItem.items;
     }
     if (file.meta.name !== 'folder.bru' && !currentSubItems.find((f) => f.name === file.meta.name)) {
       const currentItem = find(currentSubItems, (i) => i.uid === file.data.uid);
       if (currentItem) {
@@
         currentItem.loading = file.loading;
         currentItem.size = file.size;
         currentItem.error = file.error;
+        currentItem.isTransient = isTransientFile;
       } else {
         currentSubItems.push({
           uid: file.data.uid,
           name: file.data.name,
           type: file.data.type,
           seq: file.data.seq,
           tags: file.data.tags,
           request: file.data.request,
           settings: file.data.settings,
           examples: file.data.examples,
           filename: file.meta.name,
           pathname: file.meta.pathname,
           draft: null,
           partial: file.partial,
           loading: file.loading,
           size: file.size,
-          error: file.error
+          error: file.error,
+          isTransient: isTransientFile
         });
       }
     }
```

🤖 Prompt for AI Agents
In `@packages/bruno-app/src/providers/ReduxStore/slices/collections/index.js`
around lines 2709 - 2819, The batch reducer must preserve the transient
folder/file flags that single-item reducers set via state.tempDirectories: when
creating or updating folder objects in the directories loop (referencing
variables directories, childItem, dir.meta.uid) and when creating/updating file
objects in the files loop (referencing files, currentSubItems, file.data.uid and
file.meta.uid), check state.tempDirectories for the corresponding uid
(dir.meta.uid or file.data.uid) and apply the same transient marker/behavior
used by the single-item reducers (e.g., set the same transient field or metadata
on childItem or new file object and clean up state.tempDirectories if the
single-item logic removes it). Ensure both code paths (new creation and
existing-item update) mirror the single-item reducer’s handling of
state.tempDirectories so transient requests are preserved.
| "yup": "^0.32.11", | ||
| "lmdb": "^3.2.6" |
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Find Electron build config files
echo "=== Looking for Electron build config files ==="
find . -type f \( -name "electron-builder.json*" -o -name "electron-builder.js" -o -name "forge.config.js" -o -name "webpack.*.js" \) | head -20
echo ""
echo "=== Checking main package.json for build scripts ==="
cat -n package.json | grep -A 5 -B 5 '"build"'
echo ""
echo "=== Checking bruno-electron package.json for scripts and native module config ==="
head -100 packages/bruno-electron/package.json | cat -n
echo ""
echo "=== Searching for native module rebuild configuration ==="
rg -n "asarUnpack|electron-rebuild|nativeModules|native.*module|nodeIntegration|preload|asar" packages/ --type json
echo ""
echo "=== Looking for build config in bruno-electron ==="
ls -la packages/bruno-electron/ | grep -E "(webpack|build|electron-builder|forge)"
echo ""
echo "=== Checking if LMDB is already handled elsewhere ==="
rg -n "lmdb" --type json
```

Repository: usebruno/bruno
Length of output: 6019

🏁 Script executed:

```shell
#!/bin/bash
cat -n packages/bruno-electron/electron-builder-config.js
```

Repository: usebruno/bruno
Length of output: 3206
Add asarUnpack configuration for LMDB pre-built binaries.
LMDB uses pre-built binaries that cannot load from within an asar archive. Add asarUnpack to the electron-builder config to unpack LMDB binaries, or set asar: false if asar packaging isn't required.
Example fix:
```js
asar: true,
asarUnpack: ['node_modules/@lmdb/**/*', 'node_modules/lmdb/**/*']
```

🤖 Prompt for AI Agents
In `@packages/bruno-electron/package.json` around lines 75 - 76, The electron
build config in package.json needs to unpack LMDB native binaries so they aren't
loaded from inside the asar; update the electron-builder config (the JSON object
that contains "asar") to either set "asar": false or add an "asarUnpack" entry
that matches LMDB packages (e.g., include patterns for "node_modules/lmdb/**"
and "node_modules/@lmdb/**"); modify the package.json electron-builder section
accordingly so the LMDB pre-built binaries are extracted at build time.
```js
const batchAggregator = getAggregator(win);
```
Pass collectionUid into getAggregator to avoid cross-collection batching.
getAggregatorKey() incorporates collectionUid, but these call sites omit it, so different collections in the same window will share the same batch queue and flush timing.
🔧 Suggested fix (the same change applies at each call site)

```diff
- const batchAggregator = getAggregator(win);
+ const batchAggregator = getAggregator(win, collectionUid);
```

Also applies to: 458-459, 685-686
🤖 Prompt for AI Agents
In `@packages/bruno-electron/src/app/collection-watcher.js` around lines 341 -
342, The batch aggregator is being shared across collections because
getAggregator is called without the collectionUid; update all call sites that
create the aggregator (e.g., where batchAggregator is assigned) to pass the
collectionUid into getAggregator so the key generation via
getAggregatorKey(collectionUid, ...) isolates queues per collection; search for
invocations of getAggregator (and places using batchAggregator) and add the
collectionUid argument to each to prevent cross-collection batching and
incorrect flush timing.
```js
invalidateDirectory(collectionPath, dirPath) {
  this._ensureInitialized();

  if (!this.db) {
    return;
  }

  try {
    const prefix = `${collectionPath}\0${dirPath}`;
    const keysToDelete = [];

    // Find all keys that start with this directory prefix
    for (const { key } of this.db.getRange({ start: prefix })) {
      // Check if key still starts with our prefix (getRange might go beyond)
      if (!key.startsWith(prefix)) {
        break;
      }
      keysToDelete.push(key);
```
🧩 Analysis chain
🏁 Script executed:

```shell
cat -n packages/bruno-electron/src/store/parsed-file-cache.js | head -50
```

Repository: usebruno/bruno
Length of output: 1761

🏁 Script executed:

```shell
wc -l packages/bruno-electron/src/store/parsed-file-cache.js
```

Repository: usebruno/bruno
Length of output: 116

🏁 Script executed:

```shell
rg -n "import.*path|require.*path" packages/bruno-electron/src/store/parsed-file-cache.js
```

Repository: usebruno/bruno
Length of output: 89

🏁 Script executed:

```shell
rg -n "invalidateDirectory\(" packages/bruno-electron/src/store/parsed-file-cache.js
```

Repository: usebruno/bruno
Length of output: 110

🏁 Script executed:

```shell
rg -n "invalidateDirectory\(" --type js --type ts
```

Repository: usebruno/bruno
Length of output: 293

🏁 Script executed:

```shell
sed -n '219,240p' packages/bruno-electron/src/store/parsed-file-cache.js
```

Repository: usebruno/bruno
Length of output: 675

🏁 Script executed:

```shell
sed -n '655,670p' packages/bruno-electron/src/app/collection-watcher.js
```

Repository: usebruno/bruno
Length of output: 586

🏁 Script executed:

```shell
rg -n "invalidateDirectory" packages/bruno-electron/src/app/collection-watcher.js -B5 -A5
```

Repository: usebruno/bruno
Length of output: 384

🏁 Script executed:

```shell
rg -n "this.db.put\|this.db.set" packages/bruno-electron/src/store/parsed-file-cache.js -B3 -A3
```

Repository: usebruno/bruno
Length of output: 40

🏁 Script executed:

```shell
sed -n '140,180p' packages/bruno-electron/src/store/parsed-file-cache.js
```

Repository: usebruno/bruno
Length of output: 1032

🏁 Script executed:

```shell
rg -n "_getKey\(" packages/bruno-electron/src/store/parsed-file-cache.js -A5
```

Repository: usebruno/bruno
Length of output: 937

🏁 Script executed:

```shell
sed -n '100,130p' packages/bruno-electron/src/store/parsed-file-cache.js
```

Repository: usebruno/bruno
Length of output: 902

🏁 Script executed:

```shell
sed -n '219,250p' packages/bruno-electron/src/store/parsed-file-cache.js
```

Repository: usebruno/bruno
Length of output: 941

🏁 Script executed:

```shell
# Verify the startsWith behavior with string prefix collision
python3 << 'EOF'
# Demonstrate the prefix collision issue
collectionPath = "/home/user/project"
dirPath1 = "/foo/bar"
dirPath2 = "/foo/barbaz"
prefix1 = f"{collectionPath}\0{dirPath1}"
key1 = f"{collectionPath}\0{dirPath1}/file.bru"
key2 = f"{collectionPath}\0{dirPath2}/file.bru"
print(f"Prefix: {prefix1}")
print(f"Key1 (correct): {key1}")
print(f"Key1 starts with prefix: {key1.startswith(prefix1)}")
print()
print(f"Key2 (sibling): {key2}")
print(f"Key2 starts with prefix: {key2.startswith(prefix1)}")
print()
# Now with the fix
dirPath1_normalized = f"{dirPath1}/"
prefix1_fixed = f"{collectionPath}\0{dirPath1_normalized}"
print(f"Fixed prefix: {prefix1_fixed}")
print(f"Key1 starts with fixed prefix: {key1.startswith(prefix1_fixed)}")
print(f"Key2 starts with fixed prefix: {key2.startswith(prefix1_fixed)}")
EOF
```

Repository: usebruno/bruno
Length of output: 380
Add a trailing separator to prevent sibling directory prefix collisions during cache invalidation.
The string prefix /foo/bar will match both /foo/bar/file.bru and /foo/barbaz/file.bru, causing unintended cache invalidation of sibling directories. Normalize the prefix with a trailing separator to ensure only actual children are removed.
🛠️ Suggested fix
```diff
- const prefix = `${collectionPath}\0${dirPath}`;
+ const normalizedDirPath = dirPath.endsWith(path.sep) ? dirPath : `${dirPath}${path.sep}`;
+ const prefix = `${collectionPath}\0${normalizedDirPath}`;
```

🤖 Prompt for AI Agents
In `@packages/bruno-electron/src/store/parsed-file-cache.js` around lines 219 -
236, The invalidateDirectory implementation builds a prefix that can collide
with sibling directory names; change how prefix is constructed in
invalidateDirectory so it includes a trailing separator after dirPath (e.g.,
append the same separator used between collectionPath and dirPath) and use that
normalized prefix for both the getRange start and the startsWith check to ensure
only children of the exact directory are matched (update variable prefix and any
comparisons in invalidateDirectory accordingly).
Description
This PR addresses performance issues encountered when mounting large collections.
In the previous implementation, every file in a collection went through the full processing pipeline on every mount: each file was read from disk, parsed, and dispatched to the renderer as an individual Redux update.
For large collections (e.g. ~3000 requests), this resulted in thousands of rapid re-renders, causing the UI to become sluggish and temporarily unresponsive until all files were processed.
What’s changed
1. Introduced a caching layer
We now cache parsed file content (via an LMDB-backed store) and only re-parse files when necessary.
This ensures that unchanged files are served directly from cache, significantly reducing redundant parsing work.
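To illustrate the idea, here is a minimal sketch of an LMDB-backed parsed-file cache. It assumes mtime-based invalidation and uses the `lmdb` package's `open`/`get`/`put` API; the function names and the exact key layout are illustrative, not the PR's actual `ParsedFileCacheStore` implementation (the `collectionPath\0filePath` key shape follows the cache keys discussed in the review above).

```js
// Sketch only: mtime-based cache around an arbitrary parse function.
const { open } = require('lmdb');
const fs = require('fs');

const db = open({ path: '/tmp/bruno-parsed-file-cache' }); // illustrative path

function getParsedFile(collectionPath, filePath, parseFn) {
  const key = `${collectionPath}\0${filePath}`;
  const mtimeMs = fs.statSync(filePath).mtimeMs;

  const cached = db.get(key); // synchronous read in lmdb
  if (cached && cached.mtimeMs === mtimeMs) {
    return cached.data; // unchanged file: skip parsing entirely
  }

  const data = parseFn(fs.readFileSync(filePath, 'utf8'));
  db.put(key, { mtimeMs, data }); // async write; fire-and-forget is fine here
  return data;
}
```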
2. Batched UI updates
Previously, each file triggered its own UI re-render. In the new implementation, UI updates are performed in batches, triggering a single re-render per batch instead of per file.
This greatly reduces renderer load and results in a much smoother and more responsive UI, especially for large collections.
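A minimal sketch of the batching idea follows, using the flush interval and batch cap mentioned in the commits above (200 ms, 300 items). The class and method names are illustrative, not the PR's actual BatchAggregator API.

```js
// Illustrative aggregator: queues events and flushes them as one batch,
// either when the queue hits maxSize or when the interval timer fires.
class BatchAggregator {
  constructor(flushFn, { intervalMs = 200, maxSize = 300 } = {}) {
    this.flushFn = flushFn;
    this.intervalMs = intervalMs;
    this.maxSize = maxSize;
    this.queue = [];
    this.timer = null;
  }

  add(eventType, payload) {
    this.queue.push({ eventType, payload });
    if (this.queue.length >= this.maxSize) {
      this.flush(); // size-triggered flush
      return;
    }
    if (!this.timer) {
      this.timer = setTimeout(() => this.flush(), this.intervalMs); // time-triggered flush
    }
  }

  flush() {
    clearTimeout(this.timer);
    this.timer = null;
    if (this.queue.length === 0) return;
    const batch = this.queue;
    this.queue = [];
    // e.g. win.webContents.send('main:collection-tree-batch-updated', batch)
    this.flushFn(batch);
  }
}
```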
Overall, these changes dramatically improve collection mount performance and user experience without altering existing behavior.
Contribution Checklist:
JIRA: https://usebruno.atlassian.net/browse/BRU-2407
Summary by CodeRabbit
New Features
Performance Improvements