Conversation
```ts
this._flushTimeout = setTimeout(() => {
  this.flush();
}, 1);
```

```ts
if (!localParentId || this._sentSpans.has(localParentId)) {
```
should we also run cleanup on sent spans here (the `_flushSentSpanCache` method)? `_sentSpans` is checked in both flush and export, but only cleaned up in flush.
hmm but export calls flush (via `_debouncedFlush`), so it should also clean up, I think?
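For context, here is a rough sketch of the relationship being discussed. The method and field names (`_sentSpans`, `_flushSentSpanCache`, `_debouncedFlush`) are taken from this thread, but the class shape and the expiry-timestamp assumption are illustrative, not the actual exporter code:

```ts
// Illustrative sketch only -- not the real exporter. Assumes _sentSpans maps
// span IDs to an expiry timestamp, which may differ from the actual code.
class SpanExporterSketch {
  private _sentSpans: Map<string, number> = new Map();
  private _flushTimeout: ReturnType<typeof setTimeout> | undefined;

  public export(localParentId: string | undefined): void {
    // export checks the sent-span cache...
    if (!localParentId || this._sentSpans.has(localParentId)) {
      this._debouncedFlush();
    }
  }

  public flush(): void {
    // ...but only flush cleans it up. Since export reaches flush via
    // _debouncedFlush, the cleanup still runs on the export path too.
    this._flushSentSpanCache();
    // ...collect and send finished spans here...
  }

  private _debouncedFlush(): void {
    if (this._flushTimeout) {
      clearTimeout(this._flushTimeout);
    }
    this._flushTimeout = setTimeout(() => {
      this.flush();
    }, 1);
  }

  private _flushSentSpanCache(): void {
    const now = Date.now();
    for (const [spanId, expiresAt] of this._sentSpans) {
      if (expiresAt <= now) {
        this._sentSpans.delete(spanId);
      }
    }
  }
}
```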
Force-pushed from 3ecf73e to 3ad8cc2.
Like the debounce touch, as we talked about earlier. Not sure if I agree with the "code cleanup" part as much, and I'm particularly concerned about the for-loop-based building of sent spans (rather than using `flatMap()`) regarding perf.
```ts
const finishedSpans: ReadableSpan[] = [];
for (const bucket of this._finishedSpanBuckets) {
  if (bucket) {
    for (const span of bucket.spans) {
      finishedSpans.push(span);
    }
  }
}
```
Is this really cheaper than what we had? As far as I'm aware, `.map()`/`.flatMap()` is always faster than building an array on the fly, due to resizing and the interpreter not knowing the type of the items ahead of time.
Also, what's the expected number of items here? If it's just a handful, we might be bikeshedding.
buckets can be up to 1000 items large (which also does happen in reality). So not massive but also not tiny, basically :)
It is still always creating a new array (`Array.from()`), I think map/flatMap themselves are fine! 🤔
> buckets can be up to 1000 items large (which also does happen in reality). So not massive but also not tiny, basically :)

Yeah, I think that size is big enough to care about these performance measures.

> It is still always creating a new array (`Array.from()`), I think map/flatMap themselves are fine! 🤔

I'll leave this to you then. I'm speaking from experience but lack any hard evidence to back the claim up, especially anything recent. Up to you to take it, go with your own way, or put in the time to actually measure :)
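For reference, the two variants being weighed in this thread look roughly like the following sketch. The bucket shape is simplified, and it assumes each bucket keeps its spans in a Set, which is why `Array.from()` comes up:

```ts
import type { ReadableSpan } from '@opentelemetry/sdk-trace-base';

interface SpanBucket {
  spans: Set<ReadableSpan>;
}

// Variant 1: for-loop pushing into a mutable array (as in the diff above).
function collectWithLoop(buckets: Array<SpanBucket | undefined>): ReadableSpan[] {
  const finishedSpans: ReadableSpan[] = [];
  for (const bucket of buckets) {
    if (bucket) {
      for (const span of bucket.spans) {
        finishedSpans.push(span);
      }
    }
  }
  return finishedSpans;
}

// Variant 2: flatMap over the buckets, materializing each Set via Array.from().
function collectWithFlatMap(buckets: Array<SpanBucket | undefined>): ReadableSpan[] {
  return buckets.flatMap(bucket => (bucket ? Array.from(bucket.spans) : []));
}
```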
I added some more code comments based on @BYK's comments, to make it (hopefully) easier to follow what the exporter does! 🙏
```ts
/**
 * Try to flush any pending spans immediately.
 * This is called internally by the exporter (via _debouncedFlush),
 * but can also be triggered externally if we force-flush.
```
```diff
- * but can also be triggered externally if we force-flush.
+ * but can also be triggered externally if we want a force-flush.
```
Looks great, still a go from me!
```ts
function invokeFunc(): unknown {
  cancelTimers();
  callbackReturnValue = func();
  return callbackReturnValue;
```
Just giving my two cents, it's not a massive deal.
I feel like we could be saving a line by assigning the value at the same time as we return it: 🙂
```diff
- return callbackReturnValue;
+ return callbackReturnValue = func();
```
I personally find this more readable
usually minification algorithms (terser) will do this for us anyway; I prefer the separate line because it usually makes git blame a bit easier to use.
Thanks for the explanation, that makes sense! I hadn't thought about this.
I guess it would be easier to debug too, now that I think about it.
Force-pushed from 0fbb5ab to c6e9af2.

It was actually a great pointer from @BYK that we should really profile/measure whether the for loop is faster than the flatMap, instead of just assuming so. I ran a simple benchmark here: https://jsbench.me/84mc07ydz3/1. I re-ran it a few times; the difference is not huge, and which variant was faster changed from run to run. So overall this is likely not worth the change!
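For anyone wanting to reproduce the comparison locally rather than on jsbench, a rough harness along these lines would do. This is a hypothetical setup, not the linked benchmark, and results will vary by engine and data shape:

```ts
// Hypothetical local microbenchmark, not the jsbench setup linked above.
const BUCKET_COUNT = 60;
const SPANS_PER_BUCKET = 1000;

type Bucket = { spans: Set<number> } | undefined;

// Build some buckets, leaving a few undefined to mirror the `if (bucket)` check.
const buckets: Bucket[] = Array.from({ length: BUCKET_COUNT }, (_, i) =>
  i % 5 === 0 ? undefined : { spans: new Set(Array.from({ length: SPANS_PER_BUCKET }, (_, j) => j)) },
);

function withLoop(): number[] {
  const out: number[] = [];
  for (const bucket of buckets) {
    if (bucket) {
      for (const span of bucket.spans) {
        out.push(span);
      }
    }
  }
  return out;
}

function withFlatMap(): number[] {
  return buckets.flatMap(bucket => (bucket ? Array.from(bucket.spans) : []));
}

function bench(name: string, fn: () => unknown, iterations = 1_000): void {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    fn();
  }
  console.log(`${name}: ${(performance.now() - start).toFixed(1)}ms for ${iterations} iterations`);
}

bench('for-loop', withLoop);
bench('flatMap', withFlatMap);
```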
Merged cb5265c into develop.
Follow-up to #16416, slightly moving things around and making some small improvements to performance and logic (some not related to that PR, but another improvement we noticed when BYK looked into this):

Since `isSpanAlreadySent` was called for every single span that ended, it was quite high-impact and meant a little bit of additional work (although it should not necessarily run too often, but still). Instead, we now simply check `map.has(id)`, which should be good enough for what we want to achieve, IMHO. We still clean up the sent spans when flushing, so old stuff should still go away eventually.

For this I moved the already existing `debounce` function from replay to core. We re-implement this in replay with a different setTimeout implementation, which is needed for some Angular stuff (see the sketch below).
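To make that last point more concrete, here is a minimal sketch of a debounce helper with a pluggable setTimeout, in the spirit of what is described above. It is an illustration, not the actual core/replay implementation, and the option name `setTimeoutImpl` is an assumption; the real signature may differ:

```ts
type DebouncedCallback = {
  (): unknown;
  flush: () => unknown;
  cancel: () => void;
};

interface DebounceOptions {
  maxWait?: number;
  // Lets callers (e.g. Replay, to avoid zone-patched timers in Angular apps)
  // supply their own setTimeout. Hypothetical option name for this sketch.
  setTimeoutImpl?: typeof setTimeout;
}

/**
 * Sketch of a debounce: `func` runs `wait` ms after the most recent call,
 * or after `maxWait` ms at the latest.
 */
function debounce(func: () => unknown, wait: number, options: DebounceOptions = {}): DebouncedCallback {
  const setTimeoutImpl = options.setTimeoutImpl || setTimeout;
  const maxWait = options.maxWait;

  let callbackReturnValue: unknown;
  let timerId: ReturnType<typeof setTimeout> | undefined;
  let maxTimerId: ReturnType<typeof setTimeout> | undefined;

  function invokeFunc(): unknown {
    cancelTimers();
    callbackReturnValue = func();
    return callbackReturnValue;
  }

  function cancelTimers(): void {
    if (timerId !== undefined) {
      clearTimeout(timerId);
      timerId = undefined;
    }
    if (maxTimerId !== undefined) {
      clearTimeout(maxTimerId);
      maxTimerId = undefined;
    }
  }

  function debounced(): unknown {
    if (timerId !== undefined) {
      clearTimeout(timerId);
    }
    timerId = setTimeoutImpl(invokeFunc, wait);

    if (maxWait && maxTimerId === undefined) {
      maxTimerId = setTimeoutImpl(invokeFunc, maxWait);
    }

    return callbackReturnValue;
  }

  debounced.flush = invokeFunc;
  debounced.cancel = cancelTimers;
  return debounced;
}
```

As a usage example under the same assumptions, an exporter could wire its flush through this as `const debouncedFlush = debounce(() => exporter.flush(), 1);`, while a caller that needs to bypass patched timers would additionally pass its own `setTimeoutImpl`.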