Jupyter Notebook is certainly a great option. You can also use the Anaconda platform: download it for working in Python, since everything from opening .ipynb files to a host of other data-science activities is facilitated there.
I have this problem too, and this is my error:
❌ Error creando PriceSchedule: {
"errors": [
{
"id": "3e8b4645-20bc-492a-93c9-x",
"status": "409",
"code": "ENTITY_ERROR.NOT_FOUND",
"title": "There is a problem with the request entity",
"detail": "The resource 'inAppPurchasePricePoints' with id 'eyJzIjoiNjc0NzkwNTgyMSIsInQiOiJBRkciLCJwIjoiMTAwMDEifQ' was not found.",
"source": {
"pointer": "/included/relationships/inAppPurchasePricePoints/data/0/id"
}
}
]
}
from moviepy.editor import VideoFileClip, AudioClip
import numpy as np

# Re-define file path after environment reset
video_path = "/mnt/data/screen-٢٠٢٥٠٦٢٩-١٥٢٢٢٤.mp4"

# Reload the video and remove original audio
full_video = VideoFileClip(video_path).without_audio()

# Define chunk length in seconds
chunk_length =
no, I don't have such a file
Perhaps less efficient, but concise for the basic escapes, without libraries, using JDK 15+:
public static String escape(String s) {
for (char c : "\\\"bfnrt".toCharArray())
s = s.replace(("\\" + c).translateEscapes(), "\\" + c);
return s;
}
Try passing the version directly, i.e. replace
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
with
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:1.9.20"
I think that approach works, but it has some drawbacks, like limiting the capabilities of an event-driven architecture, which is asynchronous by nature, or ending up with tons of messages to be sent because you are adding too much latency.
What I would do to stop some bots from flooding the Kafka pipelines is implement a debounce system, i.e. a bot would need to cool down for a period before being able to send another message. That way you are not sending one message at a time from all of the bots; by making sure the more active bots only send every certain number of milliseconds, you allow the less active bots to send their messages.
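For illustration, here is a minimal sketch of that per-bot cooldown in Python, assuming a kafka-python KafkaProducer; the topic name, cooldown value, and helper names are invented for the example.

import time

# Illustrative per-bot cooldown gate in front of a Kafka producer.
# COOLDOWN_MS, last_sent and the topic name are assumptions for this sketch.
COOLDOWN_MS = 250          # minimum gap between two messages from the same bot
last_sent = {}             # bot_id -> timestamp (ms) of its last accepted message

def try_send(producer, bot_id, payload):
    """Send only if this bot has been quiet for at least COOLDOWN_MS."""
    now = time.monotonic() * 1000
    if now - last_sent.get(bot_id, 0) < COOLDOWN_MS:
        return False                      # drop (or buffer) the message
    last_sent[bot_id] = now
    producer.send("bot-events", payload)  # kafka-python: KafkaProducer.send(topic, value)
    return True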
What is the big O of the algorithm below?
It is not an algorithm to begin with, because the operation (for lack of a better word) you described does not fit the standard definition of what constitutes an algorithm. If it is not an algorithm, you probably should not describe it using big O notation.
As pointed out in the previous answer, the use of a PRNG is probabilistically distributed, so the time bounds would converge to a finite set of steps eventually. The rest of my answer will now assume a truly random number generating function as part of your "algorithm".
Knuth describes an algorithm in TAOCP [1, pp. 4-7] as a "finite set of rules that gives a sequence of operations for solving a specific type of problem", highlighting the characteristics of finiteness, definiteness, input, output, effectiveness.
Measured against those characteristics, your described operation fails on finiteness: it could potentially run forever without ever finding a 5-digit number not in the DB, which classifies it as an undecidable problem.
Recall that decidability concerns whether a decision problem can be correctly solved by an effective method (a finite-time, deterministic procedure) [2].
For the same reason, and akin to the Halting problem [3], your operation is undecidable because it is impossible to construct an algorithm that always correctly determines [see 4] a new 5-digit random number effectively. The operation described is merely a problem statement, not an algorithm, because it still needs an algorithm to correctly and effectively solve it.
You might have to consider Kolmogorov complexity [5] in analyzing your problem because it allows you to describe (and prove) the impossibility of undecidable problems like the Halting problem and Cantor's diagonal argument [6].
An answer from this post suggests the use of Arithmetic Hierarchy [7] (as opposed to asymptotic analysis) as the appropriate measure of the complexity of undecidable problems, but even I am still struggling to comprehend this.
Here are two options that worked for me and may also work for you.
Use the other link to the DevOps instance
Try a different browser. In my case, Chrome and Edge stopped working, yet Firefox works fine.
Updating with the code below, following the suggestions in the comment section, solves the issue:
sns.pointplot(x=group['Month'], y=group['Rate'], ax=ax1)
Try the windows_disk_utils package.
Although it's an old question, I think it's worth sharing the solution. After struggling for 4–5 hours myself, I finally resolved it by changing the network mode to 'Mirrored' in the WSL settings.
CORS (Cross-Origin Resource Sharing) is a security policy that prevents domains from being accessed or manipulated by other domains that are not allowed. I assume you are trying to perform an operation from a different domain.
There are three ways to avoid being blocked by this policy:
The domain you are trying to manipulate explicitly allows your domain to run its scripts.
You perform the operation using a local script or a browser with CORS disabled (e.g., a CORS-disabled Chrome).
You perform the operation within the same domain or its subdomains. You can test this easily in the browser console via the inspector.
Here is a useful link that addresses a similar problem:
Error: Permission denied to access property "document"
I realized how to solve this. The problem was that all the pages were showing a mix of Spanish and English labels, and I thought it was something about this:
var cookie = HttpContext.Request.Cookies[CookieCultureName];
where it takes many config values, such as language, and somehow chooses one of the two .resx files that hold all the label values, but that was not the case. I solved it by changing the value manually under Inspect -> Cookies -> Localization.CurrentUICulture.
But I still don't know where this value comes from; kinda weird, but it is what it is.
I also had a git repo inside Google Drive (for the convenience of keeping code along with other stuff). Somehow gdrive managed to corrupt itself to the point where some files inside the project became corrupted and inaccessible locally (the cloud versions remained). The only solution was to delete the gdrive cache and db.
Plus, paths inside gdrive end up including "My Drive", where the space is a problem with some tools (Flutter).
And you also end up syncing .venv (as an example for Python) and other temp files, as gdrive has no exclude patterns.
Therefore, after sorting out the situation in my own repo caused by this combination, I moved the repo into its own folder.
Maybe it's just one data point, but in my case this combination got corrupted and wasted time.
For the OTel Collector `receiver/sqlserver`:
Please make sure to use MSSQL Server 2022; I was using MSSQL Server 2019.
Please use SQLSERVER instead of the SQLEXPRESS edition.
Please grant the MSSQL user the privilege below:
GRANT VIEW SERVER PERFORMANCE STATE TO <USERNAME>
For reference, check this GitHub issue: https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/40937
File /usr/local/lib/python3.11/shutil.py:431, in copy(src, dst, follow_symlinks)
429 if os.path.isdir(dst):
430 dst = os.path.join(dst, os.path.basename(src))
--> 431 copyfile(src, dst, follow_symlinks=follow_symlinks)
432 copymode(src, dst, follow_symlinks=follow_symlinks)
433 return dst
I just had this same error pop up when trying to do the same thing with my data. Does cID by any chance have values that are not just a simple numbering 1:n(clusters)? My cluster identifiers were about 5 digits long and when I changed the ID values to numbers 1:n(clusters), the error magically disappeared.
In your .NET Core project (the one that references the .NET Framework lib), add the System.Security.Permissions NuGet package — this ensures Newtonsoft.Json won’t fail with FileNotFoundException.
Please check out the instructions at the bottom of this README. This solved it for me.
https://github.com/Weile-Zheng/word2vec-vector-embedding?tab=readme-ov-file
Look at this: https://github.com/modelcontextprotocol/python-sdk/issues/423
I believe this is a problem with MCP.
I pity the OP lumbered with such a silly and inflexible accuracy target for their implementation as shown above. They don't deserve the negative votes that this question got but their professor certainly does!
By way of introduction I should point out that we would not normally approximate sin(x) over such a wide range as 0-pi, because doing so violates one of the important heuristics generally used when approximating functions with series expansions in computing: successive terms should shrink rapidly over the range of interest.
IOW each successive term is a smaller correction to the sum than its predecessor, and they are typically summed from the highest-order term first, usually by Horner's method, which lends itself to modern computers' FMA instructions that can process a combined multiply and add with only one rounding each machine cycle.
To illustrate how range reduction helps, the test code below runs the original range test with different numbers of terms N for argument limits of pi, pi/2 and pi/4. The first case, evaluation up to pi, thrashes about wildly at first before eventually settling down. The last, pi/4, requires just 4 terms to converge.
In fact we can get away with the wide range in this instance because sin, cos and exp are all highly convergent polynomial series with a factorial in the denominator - although the large alternating terms added to partial sums when x ~ pi do cause some loss of precision at the high end of the range.
We would normally approximate over a reduced range of pi/2, pi/4 or pi/6. However taking on this challenge head on there are several ways to do it. The different simple methods of summing the Taylor series can give a few possible answers depending on how you add them up and whether or not you accumulate the partial sum into a double precision variable. There is no compelling reason to prefer any one of them over another. The fastest method is as good as any.
There is really nothing good about the professor's recommended method. It is by far the most computationally expensive way to do it and for good measure it will violate the original specification of computing the Taylor series when N>=14 because the factorial result for 14! cannot be accurately represented in a float32 - the value is truncated.
The OP's original method was perfectly valid and neatly sidesteps the risk of overflow of x^N and N! by refining the next term to be added for each iteration inside the summation loop. The only refinement would be to step the loop in increments of 2 and so avoid computing n = 2*i.
@user85421's comment reminded me of a very old school way to obtain a nearly correct polynomial approximation for cos/sin by nailing the result obtained at a specific point to be exact. Called "shooting" and usually done for boundary value problems it is the simplest and easiest to understand of the more advanced methods to improve polynomial accuracy.
In this instance we adjust the very last term in x^N so that it hits the target of cos(pi) = -1 exactly. It can be manually tweaked from there to get a crude nearly equal ripple solution that is about 25x more accurate than the classical Taylor series.
The fundamental problem with the Taylor series is that it is ridiculously over precise near zero and increasingly struggles as it approaches pi. This hints that we might be able to find a compromise set of coefficients that is just good enough everywhere in the chosen range.
The real magic comes from constructing a Chebyshev equal ripple approximation using the same number of terms. This is harder to do for 32 bit floating point and since a lot of modern CPUs now have double precision arithmetic that is as fast as single precision you often find double precision implementations lurking inside nominally float32 wrappers.
It is possible to rewrite a Taylor series into a Chebyshev expansion by hand. My results were obtained using a Julia numerical code ARMremez.jl for rational approximations.
To get the best possible coefficient set for fixed precision working in practice requires a huge optimisation effort and isn't always successful but to get something that is good enough is relatively easy. The code below shows the various options I have discussed and sample coefficients. The framework used tests enough of the range of x values to put tight bounds on worst case absolute error |cos(x)-poly_cos(x)|.
In real applications of approximation we would usually go for minimum relative error | 1 - poly_cos(x)/cos(x)| (so that ideally all the bits in the mantissa are right). However the zero at pi/2 would make life a bit too interesting for a simple quick demo so I have used absolute error here instead.
The 6 term Chebyshev approximation is 80x more accurate, but the error is in the sense that takes cos(x) outside the valid range |cos(x)| <= 1 (highly undesirable). That could easily be fixed by rescaling. They have been written in a hardcoded Horner fixed-length polynomial implementation avoiding any loops (and 20-30% faster as a result).
The worst case error in the 7 term Chebyshev approximation computed in double precision is 1000x better at <9e-8 without any fine tuning. The theoretical limit with high precision arithmetic is 1.1e-8 which is below the 3e-8 Unit in the Last Place (ULP) threshold on 0.5-1.0. There is a good chance that it could be made correctly rounded for float32 with sufficient effort. If not then 8 terms will nail it.
One advantage of asking students to optimise their polynomial function on a range like 0-pi is that you can exhaustively test it for every possible valid input value x fairly quickly - something that is usually impossible for double precision functions. A proper framework for doing this much more thoroughly than my hack below was included in a post by @njuffa about approximating erfc.
The test reveals that the OP's solution and the book solution are not that different, but the official recommended method is 30x slower, or just 10x slower if you cache N!. This is all down to using pow(x,N), including the slight rounding differences in the sum, and repeatedly recomputing factorial N (which leads to inaccuracies for N>14).
Curiously, for a basic Taylor series expansion the worst case error is not always right at the end of the range - something particularly noticeable on the methods using pow().
Here is the results table:
Description | cos(pi) | error | min_error | x_min | max_error | x_max | time (s) |
---|---|---|---|---|---|---|---|
prof Taylor | -0.99989957 | 0.000100434 | -1.436e-07 | 0.94130510 | 0.000100672 | 3.14159226 | 10.752 |
pow Taylor | -0.99989957 | 0.000100434 | -1.436e-07 | 0.94130510 | 0.000100672 | 3.14159226 | 2.748 |
your Cosinus | -0.99989957 | 0.000100434 | -1.570e-07 | 0.80652559 | 0.000100791 | 3.14157438 | 0.301 |
my Taylor | -0.99989951 | 0.000100493 | -5.476e-08 | 1.00042307 | 0.000100493 | 3.14159274 | 0.237 |
shoot Taylor | -0.99999595 | 4.0531e-06 | -4.155e-06 | 2.84360051 | 4.172e-06 | 3.14159012 | 0.26 |
Horner Chebyshev 6 | -1.00000095 | -9.537e-07 | -1.330e-06 | 3.14106655 | 9.502e-07 | 2.21509051 | 0.177 |
double Horner Cheby 7 | -1.00000000 | 0 | -7.393e-08 | 2.34867692 | 8.574e-08 | 2.10044718 | 0.188 |
Here is the code that can be used to experiment with the various options. The code is C rather than Java but written in such a way that it should be easily ported to Java.
#include <stdio.h>
#include <math.h>
#include <time.h>
#define SLOW // to enable the official book answer
//#define DIVIDE // use explicit division vs multiply by precomputed reciprocal
double TaylorCn[10], dFac[20], drFac[20];
float shootC6;
float Fac[20];
float C6[7] = { 0.99999922f, -0.499994268f, 0.0416598222f, -0.001385891596f, 2.42044015e-05f, -2.19788836e-07f }; // original 240 bit rounded down to float32
// ref float C7[8] = { 0.99999999f, -0.499999892f, 0.0416664902f, -0.001388780783f, 2.47699662e-05f, -2.70797754e-07f, 1.724760709e-9f }; // original 240 bit rounded down to float32
float C7[8] = { 0.99999999f, -0.499999892f, 0.0416664902f, -0.001388780783f, 2.47699662e-05f, -2.707977e-07f, 1.72478e-9f }; // after simple fine tuning
double dC7[8] = { 0.9999999891722795, -0.4999998918375135482, 0.04166649019522770258731, -0.0013887807826936648, 2.47699662157542654e-05, -2.707977544202106e-07, 1.7247607089243954e-09 };
// Chebeshev equal ripple approximations obtained from modified ARMremez rational approximation code
// C7 +/- 1.08e-8 (computed using 240bit FP arithmetic - coefficients are not fully optimised for float arithmetic) actually obtain 9e-8 (could do better?)
// C6 +/- 7.78e-7 actually obtain 1.33e-6 (with fine tuning could do better)
const float pi = 3.1415926535f;
float TaylorCos(float x, int ordnung)
{
double sum, term, mx2;
sum = term = 1.0;
mx2 = -x * x;
for (int i = 2; i <= ordnung; i+=2) {
term *= mx2 ;
#ifdef DIVIDE
sum += term / Fac[i]; // slower when using divide
#else
sum += term * drFac[i]; // faster to multiply by reciprocal
#endif
}
return (float) sum;
}
float fTaylorCos(float x)
{
return TaylorCos(x, 12);
}
void InitTaylor()
{
float x2, x4, x8, x12;
TaylorCn[0] = 1.0;
for (int i = 1; i < 10; i++) TaylorCn[i] = TaylorCn[i - 1] / (2 * i * (2 * i - 1)); // precomute the coefficients
Fac[0] = 1;
drFac[0] = dFac[0] = 1;
for (int i = 1; i < 20; i++)
{
Fac[i] = i * Fac[i - 1];
dFac[i] = i * dFac[i - 1];
drFac[i] = 1.0 / dFac[i];
if ((double)Fac[i] != dFac[i]) printf("float factorial fails for %i! %18.0f should be %18.0f error %10.0f ( %6.5f ppm)\n", i, Fac[i], dFac[i], dFac[i]-Fac[i], 1e6*(1.0-Fac[i]/dFac[i]));
}
x2 = pi * pi;
x4 = x2 * x2;
x8 = x4 * x4;
x12 = x4 * x8;
shootC6 = (float)(cos((double)pi) - TaylorCos(pi, 10)) / x12 * 1.00221f; // fiddle factor for shootC6 with 7 terms *1.00128;
}
float shootTaylorCos(float x)
{
float x2, x4, x8, x12;
x2 = x * x;
x4 = x2 * x2;
x8 = x4 * x4;
x12 = x4 * x8;
return TaylorCos(x, 10) + shootC6 * x12;
}
float berechneCosinus(float x, int ordnung) {
float sum, term, mx2;
sum = term = 1.0f;
mx2 = -x * x;
for (int i = 1; i <= (ordnung + 1) / 2; i++) {
int n = 2 * i;
term *= mx2 / ((n-1) * n);
sum += term;
}
return sum;
}
float Cosinus(float x)
{
return berechneCosinus(x, 12);
}
float factorial(int n)
{
float result = 1.0f;
for (int i = 2; i <= n; i++)
result *= i;
return result;
}
float profTaylorCos_core(float x, int n)
{
float sum, term, mx2;
sum = term = 1.0f;
for (int i = 2; i <= n; i += 2) {
term *= -1;
sum += term*pow(x,i)/factorial(i);
}
return (float)sum;
}
float profTaylorCos(float x)
{
return profTaylorCos_core(x, 12);
}
float powTaylorCos_core(float x, int n)
{
float sum, term;
sum = term = 1.0f;
for (int i = 2; i <= n; i += 2) {
term *= -1;
sum += term * pow(x, i) / Fac[i];
}
return (float)sum;
}
float powTaylorCos(float x)
{
return powTaylorCos_core(x, 12);
}
float Cheby6Cos(float x)
{
float sum, term, x2;
sum = term = 1.0f;
x2 = x * x;
for (int i = 1; i < 6; i++) {
term *= x2;
sum += term * C6[i];
}
return sum;
}
float dHCheby7Cos(float x)
{
double x2 = x*x;
return (float)(dC7[0] + x2 * (dC7[1] + x2 * (dC7[2] + x2 * (dC7[3] + x2 * (dC7[4] + x2 * (dC7[5] + x2 * dC7[6])))))); // cos 7 terms
}
float HCheby6Cos(float x)
{
float x2 = x * x;
return C6[0] + x2 * (C6[1] + x2 * (C6[2] + x2 * (C6[3] + x2 * (C6[4] + x2 * C6[5])))); // cos 6 terms
}
void test(const char *name, float(*myfun)(float), double (*ref_fun)(double), double xstart, double xend)
{
float cospi, cpi_err, x, ox, dx, xmax, xmin;
double err, res, ref, maxerr, minerr;
time_t start, end;
x = xstart;
ox = -1.0;
// dx = 1.2e-7f;
dx = 2.9802322387695312e-8f; // chosen to test key ranges of the function exhaustively
maxerr = minerr = 0;
xmin = xmax = 0.0;
start = clock();
while (x <= xend) {
res = (*myfun)(x);
ref = (*ref_fun)(x);
err = res - ref;
if (err > maxerr) {
maxerr = err;
xmax = x;
}
if (err < minerr) {
minerr = err;
xmin = x;
}
x += dx;
if (x == ox) dx += dx;
ox = x;
}
end = clock();
cospi = (*myfun)(pi);
cpi_err = cospi - cos(pi);
printf("%-22s %10.8f %12g %12g @ %10.8f %12g @ %10.8f %g\n", name, cospi, cpi_err, minerr, xmin, maxerr, xmax, (float)(end - start) / CLOCKS_PER_SEC);
}
void OriginalTest(const char* name, float(*myfun)(float, int), float target, float x)
{
printf("%s cos(%10.7f) using terms upto x^N\n N \t result error\n",name, x);
for (int i = 0; i < 19; i += 2) {
float cx, err;
cx = (*myfun)(x, i);
err = cx - target;
printf("%2i %-12.9f %12.5g\n", i, cx, err);
if (err == 0.0) break;
}
}
int main() {
InitTaylor(); // note that factorial 14 cannot be represented accurately as a 32 bit float and is truncated.
// easy sanity check on factorial numbers is to count the number of trailing zeroes.
float x = pi; // approx. PI
OriginalTest("Taylor Cosinus", berechneCosinus, cos(x), x);
OriginalTest("Taylor Cosinus", berechneCosinus, cos(x/2), x/2);
OriginalTest("Taylor Cosinus", berechneCosinus, cos(x/4), x/4);
printf("\nHow it would actually be done using equal ripple polynomial on 0-pi\n\n");
printf("Chebyshev equal ripple cos(pi) 6 terms %12.8f (sum order x^0 to x^N)\n", Cheby6Cos(x));
printf("Horner optimum Chebyshev cos(pi) 6 terms %12.8f (sum order x^N to x^0)\n", HCheby6Cos(x));
printf("Horner optimum Chebyshev cos(pi) 7 terms %12.8f (sum order x^N to x^0)\n\n", dHCheby7Cos(x));
printf("Performance and functionality tests of versions - professor's solution is 10x slower ~2s on an i5-12600 (please wait)...\n");
printf(" Description \t\t cos(pi) error \t min_error \t x_min\tmax_error \t x_max \t time\n");
#ifdef SLOW
test("prof Taylor", profTaylorCos, cos, 0.0, pi);
test("pow Taylor", powTaylorCos, cos, 0.0, pi);
#endif
test("your Cosinus", Cosinus, cos, 0.0, pi);
test("my Taylor", fTaylorCos, cos, 0.0, pi);
test("shoot Taylor", shootTaylorCos, cos, 0.0, pi);
test("Horner Chebyshev 6", HCheby6Cos, cos, 0.0, pi);
test("double Horner Cheby 7", dHCheby7Cos, cos, 0.0, pi);
return 0;
}
It is interesting to make the sum and x2 variables double precision and observe the effect that has on the answers. If someone fancies running simulated annealing or another global optimiser to find the best possible optimised Chebyshev 6 & 7 float32 approximations, please post the results.
I agree wholeheartedly with Steve Summit's final comments. You should think very carefully about the risk of overflow of intermediate results and the order of summation when doing numerical calculations. Numerical analysis using floating point numbers follows different rules to pure mathematics, and some rearrangements of an equation are very much better than others when you want to compute an accurate numerical value.
It's an old post, but if you could do that, then Google, Microsoft or any large server in the world could be crashed by one client, and that's not how the Internet works! By requesting the server to send a resource, you, as the client, receive it chunk by chunk. And if you only request but never receive the bytes, then as soon as the server realises it is sending data to nowhere, it stops. Think of it as an electric wire: it allows electricity to flow, right? If you cut the wire or connect the endpoint to nowhere, then the electricity has nowhere to flow.
One thing you can do is write some software and send it to people all over the world; the software would target the specific website or server you want. That's called DDoS, and you've just made malware! The people installing your malware turn their PCs into zombie machines sending requests to your target server. Fulfilling a huge number of requests from all over the world overloads the server, which then shuts down.
After all, what you're asking for is disgusting. It shows no respect to the development world, which needs to improve, not harm anybody. And for that reason I'm going to flag this post. Sorry.
Prefer this document; it worked for me:
https://blog.devgenius.io/apache-superset-integration-with-keycloak-3571123e0acf
Just for future reference, another way to achieve the same (which is mpv specific) is:
play-music()
{
local selected=($(find "${PWD}" -mindepth 1 -maxdepth 1 -type f -iname "*.mp3" | LC_COLLATE=C sort | fzf))
mpv --playlist=<(printf "%s\n" "${selected[@]}") </dev/null &>/dev/null & disown
}
I had the same issue in a previous version and found that some of my add-ons were bugging out some of the keyboard shortcuts. I set the add-ons to run only when needed and reset my keyboard shortcuts from the Tools menu.
In iOS 18 and later, an official API has been added to retrieve the tag value, making it easier to implement custom pickers and similar components.
https://developer.apple.com/documentation/swiftui/containervalues/tag(for:)
func tag<V>(for type: V.Type) -> V? where V : Hashable
func hasTag<V>(_ tag: V) -> Bool where V : Hashable
Since it is a batch job, consider using a GitHub Actions job that runs on a schedule. Use the bq utility.
Just add this to your pubspec.yaml:
dependency_overrides:
video_player_android:
git:
url: https://github.com/dennisstromberg/video_player_android.git
This solved the problem for me
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-runners-google-cloud-dataflow-java</artifactId>
<version>${beam.version}</version>
<scope>runtime</scope>
</dependency>
Depending on the number of occurrences you have for each key, you can modify this JCL to meet your requirements:
Mainframe Sort JCL - transpose the rows to columns
Thank you all; I had forgotten to replace myproject.pdb.json on the server.
There was the same character string in both 'bunchofstuff' and 'onlypartofstuff', so I used a rewrite rule where, if, say, 'stuff' was in the URL, nothing was done; otherwise, forward to the 'https://abc.123.whatever/bunchofstuff' URL. One caveat: if someone typed in the full URL for 'bunchofstuff' or 'onlypartofstuff', the rule doesn't check for 'https', but I don't think anyone would type one of those longer URLs by hand (they'd use a link with 'https' in it). The main abc.123.whatever will forward, though.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Simulated price data (or replace with real futures data)
np.random.seed(42)
dates = pd.date_range(start='2010-01-01', periods=3000)
price = 100 + np.cumsum(np.random.normal(0, 1, size=len(dates)))
df = pd.DataFrame(data={'Close': price}, index=dates)
# Pure technical indicators: moving averages
df['SMA_50'] = df['Close'].rolling(window=50).mean()
df['SMA_200'] = df['Close'].rolling(window=200).mean()
# Strategy: Go long when 50 SMA crosses above 200 SMA (golden cross), exit when below
df['Position'] = 0
df.loc[df['SMA_50'] > df['SMA_200'], 'Position'] = 1  # .loc avoids chained-assignment issues
df['Position'] = df['Position'].shift(1) # Avoid lookahead bias
# Calculate returns
df['Return'] = df['Close'].pct_change()
df['Strategy_Return'] = df['Return'] * df['Position']
# Performance
cumulative_strategy = (1 + df['Strategy_Return']).cumprod()
cumulative_buy_hold = (1 + df['Return']).cumprod()
# Plotting
plt.figure(figsize=(12, 6))
plt.plot(cumulative_strategy, label='Strategy (Technical Only)')
plt.plot(cumulative_buy_hold, label='Buy & Hold', linestyle='--')
plt.title('Technical Strategy vs Buy & Hold')
plt.legend()
plt.grid(True)
plt.tight_layout()
plt.show()
I can't speak for ACF, but both BoxLang and Lucee use the same codepaths under the hood for executing your query whether you use a tag/component or a BIF.
Here are the relevant files in BoxLang (both use PendingQuery):
pip install webdrivermanager
webdrivermanager chrome --linkpath /usr/local/bin
(check if chrome driver is in PATH)
I was about to comment that I had the same issue since a few weeks (Chrome on Windows 11), but when I opened my Chrome settings to specify my version (which was 137), chrome auto-updated. Once relaunched with version 138, Google voices started working again.
The solution is to change the AspNetCoreHostingModel of all the applications to OutOfProcess in the web.config:
<AspNetCoreHostingModel>OutOfProcess</AspNetCoreHostingModel>
Bigtable's SQL support is now in GA and supports server-side aggregations. In addition to read-time aggregations it is also possible to define incremental materialized views that do rollups/aggregations at ingest-time.
In a lot of real-time analytics applications, one can imagine stacking these e.g. use an incremental materialized view to aggregate clicks into 15 minute windows and then at read time apply filters and GROUP BY to convert pre-aggregated data into a coarser granularity like N hours, days etc.
Here’s what I’d try (a rough sketch in Python follows the list):
Instead of blocking on city OR postal code, block on city AND the first few digits of postal code — this cuts down your candidate pairs a ton.
Convert city and postal code columns to categorical types to speed up equality checks and save memory.
Instead of doing fuzzy matching row-by-row with apply(), try using Rapidfuzz’s batch functions to vectorize street name similarity.
Keep your early stopping logic, but order components by weight so you can bail out faster.
Increase the number of Dask partitions (like 500+) and if possible run on a distributed cluster for better parallelism.
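A rough sketch of the first three points, assuming pandas DataFrames with 'city', 'postal_code' and 'street' columns; the column names, the three-digit postal prefix and the score threshold are all assumptions:

import pandas as pd
from rapidfuzz import fuzz, process

def match_streets(left: pd.DataFrame, right: pd.DataFrame, threshold: int = 90):
    # Block on city AND the first three postal-code digits.
    for df in (left, right):
        df["city"] = df["city"].astype("category")
        df["block"] = df["city"].astype(str) + "|" + df["postal_code"].astype(str).str[:3]
    matches = []
    for block, l in left.groupby("block"):
        r = right[right["block"] == block]
        if r.empty:
            continue
        # One vectorised rapidfuzz call per block instead of row-by-row apply():
        # scores is a len(l) x len(r) similarity matrix.
        scores = process.cdist(l["street"].tolist(), r["street"].tolist(),
                               scorer=fuzz.token_sort_ratio, workers=-1)
        for i, row in enumerate(scores):
            for j, score in enumerate(row):
                if score >= threshold:
                    matches.append((l.index[i], r.index[j], score))
    return matches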
I have this problem too. There may be alternative ways, but why is this not working?
It's better to have a server, but if you don't have any server you can use HOST.
I was doing a refresher on web application security recently and asked myself the same question, and I became pretty annoyed while trying to understand the answer, because even the literature itself seems to mix the mathematical theory with real-life implementations.
The top-voted answer explains things at length but for some reason fails to state it plainly, so I'd like to make a small addition for any application developer who stumbles upon this. It is obvious enough that we should not use private and shared keys interchangeably, as their names suggest. The question here is: the literature definition of private and public key pairs states that:
Only the private key can decrypt a message encrypted with the public key
Only the public key can decrypt a message encrypted with the private key
Why, then, can't one be used in place of the other? That is a completely legitimate question if you take real-world application out of the picture, which the literature also often tends to do.
The simple answer pertains to the actual implementations of the exchange algorithm in question, i.e. RSA, and is that the public key can be extracted from the private key contents.
The reason is that when generating a private key file with RSA, using pretty much any known tool in actual practice, the resulting file contains both exponents and therefore both keys. In fact, when using the openssl or ssh-keygen tool, the public key can be re-extracted from the original private key contents at any time: https://stackoverflow.com/a/5246045
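To see this in practice, here is a small Python sketch using the cryptography package (the file name is illustrative): it re-derives the public key from nothing but the private key file, much like openssl rsa -pubout or ssh-keygen -y do.

from cryptography.hazmat.primitives import serialization

# Load an RSA private key and recover its public half from that file alone.
with open("private_key.pem", "rb") as f:          # illustrative path
    private_key = serialization.load_pem_private_key(f.read(), password=None)

public_pem = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
print(public_pem.decode())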
Conceptually, neither of the exponents is mathematically "private" or "public"; those are just labels assigned upon creation and could easily be assigned in reverse, and deriving one exponent from the other is an equivalent problem from both perspectives.
Tl;dr: private keys and shared keys are not interchangeable, and you must be a good boy and host your private key only on the server/entity that needs to be authenticated by someone else; it's equally important to tell you that you should wear a seatbelt while driving your car. The reason is that the private key contents generally hold information for both keys, and the shared key can be extracted from that. That holds for pretty much any tool that implements RSA exchange.
Include windows.h and add this code to the start of your program (within main()) to get the virtual terminal codes to work in CMD.EXE:
HANDLE hOut = GetStdHandle(STD_OUTPUT_HANDLE);
DWORD dwMode = 0;
GetConsoleMode(hOut, &dwMode);
dwMode |= ENABLE_VIRTUAL_TERMINAL_PROCESSING;
SetConsoleMode(hOut, dwMode);
] should be placed directly after ^! Hence, [^][,] is correct, while [^[,]] is incorrect.
Do you mean a button like that? You can use composeView.addButton(buttonDescriptor):
InboxSDK.load(1, 'YOUR_APP_ID_HERE').then(function(sdk){
  sdk.Compose.registerComposeViewHandler(function(composeView){
    composeView.addButton({
      title: "title",
      onClick: ()=>{ console.log("clicked"); }
    });
  });
});
In VS Code, when you hover over the args, a small popup will open. It will give you the option to edit, clear, or clear all.
realloc() is not intended to re-allocate memory of stack variables (local variables of a function). This might seem trivial, but it is a fruitful source of accumulating memory bugs. Hence,
uint64_t* memory = NULL;
should be a heap allocation:
uint64_t *memory = (uint64_t *) malloc(sizeof(uint64_t));
import json
import snowflake.connector
def load_config(file_path):
"""Load the Snowflake account configuration from a JSON file."""
with open(file_path, 'r') as file:
return json.load(file)
def connect_to_snowflake(account, username, password, role):
"""Establish a connection to a Snowflake account."""
try:
conn = snowflake.connector.connect(
user=username,
password=password,
account=account,
role=role
)
return conn
except Exception as e:
print(f"Error connecting to Snowflake: {e}")
return None
def fetch_tags(conn):
"""Fetch tag_list and tag_defn from the Snowflake account."""
cursor = conn.cursor()
try:
cursor.execute("""
SELECT tag_list, tag_defn
FROM platform_common.tags;
""")
return cursor.fetchall() # Return all rows from the query
finally:
cursor.close()
def generate_sql_statements(source_tags, target_tags):
"""Generate SQL statements based on the differences in tag_list values."""
sql_statements = []
# Create a set for target tag_list values for easy lookup
target_tags_set = {tag[0]: tag[1] for tag in target_tags} # {tag_list: tag_defn}
# Check for new tags in source that are not in target
for tag in source_tags:
tag_list = tag[0]
tag_defn = tag[1]
if tag_list not in target_tags_set:
# Create statement for the new tag
create_statement = f"INSERT INTO platform_common.tags (tag_list, tag_defn) VALUES ('{tag_list}', '{tag_defn}');"
sql_statements.append(create_statement)
return sql_statements
def write_output_file(statements, output_file):
"""Write the generated SQL statements to an output file."""
with open(output_file, 'w') as file:
for statement in statements:
file.write(statement + '\n')
def main():
# Load configuration from JSON file
config = load_config('snowflake_config.json')
# Connect to source Snowflake account
source_conn = connect_to_snowflake(
config['source']['account'],
config['source']['username'],
config['source']['password'],
config['source']['role']
)
# Connect to target Snowflake account
target_conn = connect_to_snowflake(
config['target']['account'],
config['target']['username'],
config['target']['password'],
config['target']['role']
)
if source_conn and target_conn:
# Fetch tags from both accounts
source_tags = fetch_tags(source_conn)
target_tags = fetch_tags(target_conn)
# Generate SQL statements based on the comparison
sql_statements = generate_sql_statements(source_tags, target_tags)
# Write the output to a file
write_output_file(sql_statements, 'execution_plan.sql')
print("Execution plan has been generated in 'execution_plan.sql'.")
# Close connections
if source_conn:
source_conn.close()
if target_conn:
target_conn.close()
if __name__ == "__main__":
main()
You can adjust the "window.zoomLevel" in your settings.json file to increase or decrease the font size of the sidebar. To change the font size in the text editor, use "editor.fontSize", and for the terminal, use "terminal.integrated.fontSize".
Once you find the right balance, it should significantly improve your comfort while working.
That is, unless you’re aiming to style a specific tab in the sidebar individually.
What helped for me: I had pyright installed, so I opened the settings by pressing command+, and typing @ext:anysphere.cursorpyright, then found Cursorpyright › Analysis: Type Checking Mode and changed it from "basic" to "off".
I spent 4 hours trying to configure php.ini with no result.
In my case the issue was Avast: disable it and it works fine.
You need to create a custom Protocol Mapper in Keycloak to programmatically set the userId value before the token is generated. This guide may help you get started and give you a clear idea of the implementation process:
It looks like you are sending some HTTP statuses (via completeWithError) when some data has already been written into the SSE stream (the HTTP body).
Did you manage to solve this?
Better late than never: you might want to try this one, PHP routes.
Git 2.49.0-rc0 finally added the --revision option to git clone:
https://github.com/git/git/commit/337855629f59a3f435dabef900e22202ce8e00e1
git clone --revision=<commit-ish> $OPTIONS $URL
I’ve faced a similar issue while working on a WooCommerce-based WordPress site for one of our clients recently. The WYSIWYG editor (TinyMCE) stopped loading properly, especially in the Product Description field, and we got console errors just like yours.
Here are a few things you can try:
1. Disable All Plugins Temporarily
The error Cannot read properties of undefined (reading 'wcBlocksRegistry') is often related to a conflict with the WooCommerce Blocks or another plugin that’s hooking into the editor.
Go to your plugins list and temporarily deactivate all plugins except WooCommerce.
Then check if the editor loads correctly.
If it does, reactivate each plugin one by one to identify the culprit.
2. Switch to a Default Theme
Sometimes the theme might enqueue scripts that interfere with the block editor. Try switching to a default WordPress theme like Twenty Twenty-Four to rule that out.
3. Clear Browser & Site Cache
This issue can also be caused by cached JavaScript files:
Clear your browser cache
If you're using a caching plugin or CDN (like Cloudflare), purge the cache
4. Reinstall the Classic Editor or Disable Gutenberg (Temporarily)
If you're using a classic setup and don't need Gutenberg, install the Classic Editor plugin and see if that resolves the issue. It can bypass block editor conflicts temporarily.
5. Check for Console Errors on Plugin Pages
Go to WooCommerce > Status > Logs to see if anything unusual is logged when the editor fails to load.
6. Update Everything
Ensure:
WordPress core
WooCommerce
All plugins & themes
...are fully updated. These kinds of undefined JavaScript errors are often fixed in plugin updates.
Let me know what worked — happy to help further if you're still stuck. We had a very similar case at our agency (Digital4Design), and in our case, it was a conflict between an outdated Gutenberg add-on and a WooCommerce update.
For those using Apple or Linux
JAVA_HOME=$(readlink -f "$(which java)" | sed 's#/bin/java##')
CREATE OR REPLACE PROCEDURE platform_common.tags.store_tags()
RETURNS STRING
LANGUAGE SQL
AS
$$
BEGIN
-- Create or replace the table to store the tags
CREATE OR REPLACE TABLE platform_common.tags (
database_name STRING,
schema_name STRING,
tag_name STRING,
comment STRING,
allowed_values STRING,
propagate STRING
);
-- Execute the SHOW TAGS command and store the result
EXECUTE IMMEDIATE 'SHOW TAGS IN ACCOUNT';
-- Insert the results into the tags table
INSERT INTO platform_common.tags (database_name, schema_name, tag_name, comment, allowed_values, propagate)
SELECT
"database_name",
"schema_name",
"name" AS "tag_name",
"comment",
"allowed_values",
"propagate"
FROM
TABLE(RESULT_SCAN(LAST_QUERY_ID()))
WHERE
"database_name" != 'SNOWFLAKE'
ORDER BY
"created_on";
RETURN 'Tags stored successfully in platform_common.tags';
END;
$$;
Replace
- export HOST_PROJECT_PATH=/home/project/myproject
with
- export HOST_PROJECT_PATH=${BITBUCKET_CLONE_DIR}
I found the problem and would like to share it.
It is possible to save a task without assigning it to a user.
In the update function I do this:
$user = User::findOrFail($validated['user_id']);
$customer = Customer::findOrFail($validated['customer_id']);
I finally figured it out. The main issue was that I initially linked my new External ID tenant to an existing subscription that was still associated with my home directory, which caused problems.
To resolve it, I created a new subscription and made sure to assign it directly to the new tenant / directory.
After that, I was able to switch directories again — and this time, MFA worked as expected, and I successfully switched tenants.
Additionally, I now see that I’m assigned the Global Administrator role by default in the new tenant, just as expected and as confirmed in the Microsoft Docs
By default, the user who creates a Microsoft Entra tenant is automatically assigned the Global Administrator role.
In my opinion, a more effective approach would be to interpret a fixed point as a separator between the high and low bits. In this case, scaling becomes an arithmetic shift. For example, the decimal number 5.25 = 101.01 in binary representation.
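A tiny Python sketch of that idea (Q-format style; the choice of 2 fractional bits simply matches the 5.25 example):

FRAC_BITS = 2                       # number of bits to the right of the fixed point

def to_fixed(x: float) -> int:
    # Scaling by 2**FRAC_BITS is the same as shifting the point right.
    return round(x * (1 << FRAC_BITS))      # 5.25 -> 21 (0b10101, i.e. 101.01)

def to_float(q: int) -> float:
    return q / (1 << FRAC_BITS)             # 21 -> 5.25

q = to_fixed(5.25)
print(bin(q), to_float(q))                  # 0b10101 5.25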
C++ code for transposing a square matrix in place, without using a new matrix:
// Swap elements across the main diagonal (square matrices only).
for (int i = 0; i < arr.size(); i++) {
    for (int j = 0; j < i; j++) {
        swap(arr[i][j], arr[j][i]);
    }
}
Is it resolved? I am facing the same issue.
Did you add the description field later? Your code looks good actually.
python manage.py search_index --delete -f
python manage.py search_index --create
python manage.py search_index --populate
I found the problem. While working in the Timer4 interrupt, at one point we need to enable the Timer4 interrupt using the third bit of the EIE2 register, but I did EIE2 &= 0x08; instead of EIE2 |= 0x08;, and that turns off the Timer3 interrupt because the first bit of EIE2 enables it. Thank you...
Replacing void with Task helps, but it took a long time to figure that out after trying everything...
The label looks off because you're using a small font.
InputLabelProps: {
sx: {
fontSize: '13px',
top: '50%',
transform: 'translateY(-50%)',
'&.MuiInputLabel-shrink': {
top: 0,
transform: 'translateY(0)',
},
},
},
I think I've found the problem. In view.py, I create a gRPC channel for each request. A channel takes time to connect to the server, and I think that if I send a gRPC request while the channel is not yet connected, this error happens. The code is under development; I will change view.py to reuse the gRPC channel. After that, if the error persists, I will use your suggestion and let you know the result. Thanks.
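For reference, a minimal sketch of the channel reuse I have in mind; the address and the generated stub module are placeholders, not the actual project code.

import grpc
import my_service_pb2_grpc  # placeholder for the generated stub module

# Dial once at import time; gRPC channels are thread-safe and manage
# reconnection themselves, so every Django view can share this instance.
_channel = grpc.insecure_channel("localhost:50051")            # illustrative address
grpc.channel_ready_future(_channel).result(timeout=10)         # optionally wait for the connection
_stub = my_service_pb2_grpc.MyServiceStub(_channel)

def get_stub():
    return _stub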
The problem is that users like "Everyone" or "Users" do not exist in a German Windows installation;
they are called "Jeder" and "Benutzer".
So there must be a generic way, which I thought was:
User="[WIX_ACCOUNT_USERS]"
But I cannot get it to work in WiX 6.0.1.
Just update VS to the latest build or use the workaround mentioned in the issue.
Try updating node-abi:
npm install node-abi@latest
The newer version of node-abi includes support for Electron 37.1.0.
I have found that in the Solution NuGet packages.
I managed to solve the issue by knowing exactly the number of differences in the first few lines and counting onwards from there.
diff a.txt b.txt &> log.log
$(wc -l < "log.log") != <known number of differences>
Delete the Python path from the environment variables, and then install Python again. It will work.
On Ubuntu this is enough:
sudo apt-get install libxml2-dev libxslt-dev
Also note that python-dev in the documentation refers to Python 2 and is not needed for Python 3 installations.
I suggest using LangGraph Studio: https://langchain-ai.github.io/langgraph/concepts/langgraph_studio/.
It can show you the graph as you work on it, and you can run live tests in graph mode and in chat mode; it's also integrated with LangSmith. Pretty useful for agent development.
Make your Dto a value class and it won't be part of the serialized object.
It seems this was user error (kind of). In Figma I have a layer that is used as a mask (to clip the SVG). Every other program doesn't have an issue with it having a color, but Godot 4.4 seems to apply it anyway.
As you can see, the mask has a color and a blend mode, and that gets applied in Godot, making it darker. If I set the "Fill" to white, everything looks OK.
According to the information here, with Vuforia 11.3 you need Unity 6 (6.0.23f1+).
Use a CTE to evaluate the CASE expressions; then the main query can filter:
with x as (select ... case this, case that ... from tbls)
select <whatever> from x where <your-filters>;
For this purpose, using a custom dataset is the recommended approach. You can follow this guide:
https://docs.kedro.org/en/0.19.14/data/how_to_create_a_custom_dataset.html#advanced-tutorial-to-create-a-custom-dataset
The implementation should be straightforward - you mainly need to define the _save method like this:
class YourModelDataset(AbstractDataset):
def __init__(self, filepath: str):
self._filepath = filepath
def _save(self, model: PipelineModel) -> None:
model.save(self._filepath)
Once defined, just register the dataset in your catalog.yml with the appropriate type and filepath.
Deleting my simulator device in Android Studio and then installing a new one worked for me.
Upgrade to 9.5.0 - it should work:
<dependency>
<groupId>io.appium</groupId>
<artifactId>java-client</artifactId>
<version>9.5.0</version>
</dependency>
Google TV apps are built on the Android TV SDK, which is Java/Kotlin-based, whereas Samsung uses Tizen (HTML5/JS) and LG runs webOS (also HTML5/JS). So you have to build separate codebases for each platform. Unfortunately, the Google TV guidelines and UI components won't directly translate to Tizen or webOS due to different runtime environments, design standards, and APIs.
However the good news is:
Your backend logic (APIs, video streams, etc.) can remain the same across platforms.
You can follow a modular approach separating frontend UI logic from core business logic to minimize duplication.
There are some cross-platform frameworks (like React Native for TV or Flutter with custom rendering) but support is limited and usually not production-ready for Samsung/LG.
If you're looking for scale and faster deployment, many businesses go with white-label solutions like VPlayed**, Zapp, or Accedo**, which offer multi-platform Smart TV apps with a unified backend and consistent UX. For more details on accessing a while label cloud tv platform checkout: https://www.vplayed.com/cloud-tv-platform.php
In short yes, separate codebases are required but the strategy you use can save you a lot of time in the long run.
There is an option in Postman now that allows reverting changes in any collection, and thus in a forked collection as well.
Just click on the collection and, in the right side panel, click History. Choose where you want to restore changes from.
Done!
It is a bit slow, so give it some time to reflect the changes.
Ultimately this looks like an opinion-based question, so I will give you my two cents:
Option 1:
The server application will have to create a framework to remember the values written for each "command", if that is necessary. "Command" implies that writing to a particular command ends up performing a certain procedure, and nothing too complicated; in that case Option 1 is a good option. If, however, you need to remember the values written for each command (which is implied, since you want to support read), then you will have to write code that is already written by whichever BLE stack you are using.
Option 2:
This looks good if you already know the number of commands you will support, and hence the number of characteristics you will have; there is an overhead of discovery involved. The client will have to discover all characteristics at the beginning and remember which handle belongs to which UUID and hence which command. While this involves some extra complication, it provides more flexibility to specify permissions for each command. You could have a command require encryption/authentication/authorization to write to, while keeping other commands with different permissions. You could also have different write properties, commands that only accept read or write, notifications/indications independently for each characteristic, size control for commands that will always have a particular size, better error code responses, etc.
If the requirements are only and exactly as you specified, then both options are fine; you could probably toss a coin and let fate decide, though note that my recommendation is the second option.
If there is a possibility to extend functionalities in the future, I will only recommend the second option.
Did you manage to solve this? I had to do the following, which forced the connection to be Microsoft.Data.SqlClient by creating it myself. I also pass true to the base constructor to let the context dispose of the connection afterwards.
using System;
using System.Data.Common;
using System.Data.Entity;
using Microsoft.AspNet.Identity.EntityFramework;
using Microsoft.Data.SqlClient;
using Mintsoft.LaunchDarkly;
using Mintsoft.UserManagement.Membership;
namespace Mintsoft.UserManagement
{
public class UserDbContext : IdentityDbContext<ApplicationUser>
{
private static DbConnection GetBaseDbConnection()
{
var connectionString = System.Configuration.ConfigurationManager.ConnectionStrings["BaseContext"].ConnectionString;
return new SqlConnection(connectionString);
}
public UserDbContext()
: base(GetBaseDbConnection(), true)
{
}
Yes, there is a way, described here.
But... what utilizes ForkJoinPool?
ForkJoinPool is designed to split a task into smaller tasks and reduce the results of the computations at the end (e.g. a collection's parallel stream).
So if it's not suitable for your case, it would be advisable to use another ExecutorService.
The viewpoint I can add to solving the problem above is this: the key insight from this case is that Flask-Alembic migration errors often mask the real problem, namely code that accesses the database during module import, before the database is properly initialized. This works locally because the tables already exist, but fails on fresh deployments.
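To make that concrete, here is a generic sketch (not the OP's code; the settings table and the names are invented): module-level queries run at import time, before migrations, so they should be deferred into a function that runs after the app is initialized.

from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import text

db = SQLAlchemy()

# Anti-pattern: a module-level query such as
#   DEFAULTS = db.session.execute(text("SELECT * FROM settings")).all()
# runs the moment this module is imported, before the tables exist on a fresh deploy.

def create_app():
    app = Flask(__name__)
    app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///app.db"   # illustrative URI
    db.init_app(app)
    return app

def load_defaults():
    # Deferred alternative: only query once the app context and tables exist.
    return db.session.execute(text("SELECT * FROM settings")).all()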
It's strange that this error happens when you do a full reload 🤔.
I believe the problem you are having is due to partial page navigation. This is when SharePoint refreshes just part of the page, not the whole website, which means many parts, like onInit or a useEffect first run, will not rerun; because of that, sp.web may still be relying on an outdated or uninitialized context.
This may happen when you initialize sp.setup({...}) only once, say in onInit, and it's not updated on subsequent navigations.
You may find more context in this article.
https://www.blimped.nl/spfx-and-the-potential-pitfall-of-partial-page-navigation/
Hope what I suspect is correct and my comment will help you get unblocked 🤞🤞
Happy Coding!
Personally, I developed and use Evernox to manage my databases visually.
In Evernox you can create revisions for each version of your database/diagram.
From these revisions you can automatically generate migrations for different DBMSs like Postgres, MySQL, and BigQuery.
It's really convenient to manage your database in a visual diagram editor and then just create revisions for each version, where you can visually inspect the differences.