Add quotes, as Emad Kerhily already said.
The code below, with reference to the above, shows a better-quality image:
Row.Builder().setImage(getIcon(type), Row.IMAGE_TYPE_LARGE)
@mk12, using your above script, I first ran the following command in GNOME Terminal:
curl https://www.broadcastify.com/listen/feed/41475
which yielded what appears to be all of the HTML source code for the page.
Next, I tried adapting your script to the following, which resulted in a blank line followed by the command prompt.
This is what I have:
auth=$(curl -s "https://www.broadcastify.com/listen/feed/41475/" \
| grep webAuth \
| head -n 1 \
| sed 's/^.*"webAuth": "//;s/".*$//')
relay_url=$(curl -s "https://www.broadcastify.com/listen/41475" \
-H "webAuth: $auth" -d 't=14' \
| grep -o 'http://[^"]*')
audio_url=$(curl -s "$relay_url" | cut -d' ' -f5)
echo "$audio_url"
I have tried replacing the feed number with the $1 variable, and it yielded an error in the CLI.
One thing I did notice was what seems to me to be a variable portion of an audio stream link:
link = "https://audio.broadcastify.com/" + a.id + ".mp3";
Can you or anyone else shed some light on the a.id portion of the link?
Thank you
This error can be avoided by upgrading Pandas, using pip install --upgrade pandas.
Update 2025:
If you encounter the error Possible unrecognized qualifier, searching for this term literally, then you need to use path: instead of filename:, e.g.
your_search_term -path:package-lock.json
Did you find a solution for this?
This was solved by upgrading Pandas through pip install --upgrade pandas.
Enclose the variable in quotes so that the substituted value is recognized as a string and not as a number:
{
"TicketTitle": "Main Title",
"DateCreated": "{{timestamp}}",
"CreatedBy": "John Doe"
}
Intuition: We know that for a given network N(G(V,E),s,t,c) and a maximum flow f, every minimal cut in N is saturated under f.
As for our problem, we can infer that the cut ({s}, V-{s}) is not necessarily minimal. What if we create a network in which the edges adjacent to s are much larger than those of a minimal cut?
Try to think of a counterexample satisfying this condition.
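For instance: take V = {s, a, t} with c(s,a) = 100 and c(a,t) = 1. The maximum flow has value 1 and saturates the minimal cut ({s,a}, {t}), while the cut ({s}, V-{s}) has capacity 100 but carries only 1 unit of flow, so it is not saturated.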
This can be solved by upgrading to the latest version of Pandas, through pip install --upgrade pandas.
I just ran into the same problem. This works:
@get:io.micronaut.data.annotation.Transient
val batchId: String?
get() = batchIdUUID?.toString() ?: batchIdString
This is not an error. This just means that you already have the latest version of pip.
Did you check every option in the SES configuration set?
https://docs.aws.amazon.com/ses/latest/dg/using-configuration-sets.html
Is the identity that you are using attached to the configuration?
Try running python3 -m pip install --upgrade pip
and then run pip3 -V
or pip3.7 -V
to check your version. This will make sure that you are upgrading, and checking the version of, the pip that actually matches the Python version you are using (3.7 in this case).
Resolved "Operand type clash: int is incompatible with date" by using sql CONVERT function.
convert(datetime,@{variables('date_value')});
I ran "ping finance.yahoo.com" and received 100% packet loss. I realized my organization's Wi-Fi was blocking my requests.
Try running python -m pip install --upgrade pip
or python -m pip install --upgrade pip --no-cache-dir
to upgrade pip.
If that doesn't solve your problem, try also running pip install --upgrade setuptools
to upgrade setuptools.
There isn't much you can do about this now. You'll need to install all the packages again for the new Python version. For the future, you can follow the instructions at Keeping Installed Packages When Updating Python to avoid this happening again (typically: pip freeze > requirements.txt before upgrading, then pip install -r requirements.txt afterwards).
It looked like a small mistake and it was. Make sure you import the correct `select` function.
Wrong
from sqlalchemy import select
Correct
from sqlmodel import select
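A minimal sketch of where this matters (the Hero model here is a hypothetical example): sqlmodel's Session.exec() is designed around sqlmodel's own select, so importing the sqlalchemy one breaks the typing:
from typing import Optional

from sqlmodel import Field, Session, SQLModel, create_engine, select

class Hero(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    name: str

engine = create_engine("sqlite://")  # in-memory database for the example
SQLModel.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Hero(name="Deadpond"))
    session.commit()
    # sqlmodel's select produces a statement that session.exec() accepts directly.
    heroes = session.exec(select(Hero)).all()
    print(heroes)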
For future visitors who need to persist the session in Playwright after a manual Google login and avoid the block:
Couldn't sign you in. For your protection, you can't sign in from this device. Try again later, or sign in from another device.
I am sharing this solution, inspired by the answers from jwallet and Jose A, as neither of their solutions worked for me on its own as of today.
This code allows you to manually log in to Google and save the session state for reuse in future executions. It uses Playwright Extra and a random User-Agent to minimize blocks.
import { createRequire } from 'module';
const require = createRequire(import.meta.url);
const UserAgent = require('user-agents');
import { chromium } from 'playwright-extra';
const optionsBrowser = {
headless: false,
args: [
'--disable-blink-features=AutomationControlled',
'--no-sandbox',
'--disable-web-security',
'--disable-infobars',
'--disable-extensions',
'--start-maximized',
'--window-size=1280,720',
],
};
const browser = await chromium.launch(optionsBrowser);
const optionsContext = {
userAgent: new UserAgent([/Chrome/i, { deviceCategory: 'desktop' }]).userAgent,
locale: 'en-US',
viewport: { width: 1280, height: 720 },
deviceScaleFactor: 1,
};
const context = await browser.newContext(optionsContext);
const page = await context.newPage();
await page.goto('https://accounts.google.com/');
// Give 90 seconds to complete the manual login.
const waitTime = 90_000;
console.log(`You have ${waitTime / 1000} seconds to complete the manual login...`);
await page.waitForTimeout(waitTime);
// Save the session
await page.context().storageState({ path: 'myGoogleAuth.json' });
console.log('Session saved in myGoogleAuth.json');
await browser.close();
Once the session is saved, you can reuse it by loading the myGoogleAuth.json
file in future executions:
const context = await browser.newContext({ storageState: 'myGoogleAuth.json' });
This prevents having to log in every time you run the script.
If the session does not persist, check whether Google has invalidated the cookies and try logging in again to generate a new storageState.
I had to add "general.useragent.override.localhost" in about:config, set to what the Flash file wanted: GameVerStage/9.9.9.
I swapped the JSON payload with a different one from https://jsonplaceholder.typicode.com/todos and modified the code to work with the different payload and it worked. I can only conclude that the initial payload is buggy.
I had this error, and I eventually found the problem was due to using spaces to separate the columns in the VCF data. Tabs are actually required by the spec, and hence by implementations that read VCF.
I was finally able to fix it by adjusting the routes of the livewire component.
No help - but I just got this issue too.
Make sure your project is set to use C++17:
Right-click your project in Solution Explorer.
Go to Properties → C/C++ → Language.
Set C++ Language Standard to ISO C++17 Standard (/std:c++17) or later.
If you want to both put the current line and update the diff, you can combine them like so:
:.diffput|diffupdate
Please delete Node.js and Next.js from your PC, thanks.
If you have a dump file, follow the instructions for a backup restore here.
Session::forget('value');
but the value wasn't deleted.
However, when I tried using the save method like so:
Session::forget('value');
Session::save();
it worked! (I.e., the value was deleted from the session.)
Please, what am I doing wrong? I don't see the save method in the Laravel documentation for use with Session::flush() and Session::forget().
$db = db_connect();
$builder = $db->table($this->table)->select('id,CONCAT(nombre,\' \',ap,\' \',am) as nombre,email,dependencia,area');
$resultado = $builder->get();
The trick is to escape the quotes in order to insert the spaces; this is a solution for when you use the helpers in CodeIgniter:
\' \' (the escaped space) = mario gonzalez gonzalez
I found this fix:
// Read text test - read the text from a txt file and create a new txt file
// with the original txt file's data
public void ReadAndWriteTextTest()
{
// Get the root path
var webRoot = _env.WebRootPath;
// Text with this txt file which is a copy of the MS Word document
// what_we_do_and_how.docm
var inputfile = System.IO.Path.Combine(webRoot, "What_we_do_and_how ENCODED.txt");
// The txt file name to be created.
var outputfile = System.IO.Path.Combine(webRoot, "A6CWrite.txt");
// RC@202503100000 - Get the 1252 specific code page to use with Encoding.RegisterProvider
Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);
// Read the input txt file data
string inputdata = System.IO.File.ReadAllText(inputfile, System.Text.Encoding.GetEncoding(1252));
// Create and write to the output txt file
System.IO.File.WriteAllText(outputfile, inputdata, System.Text.Encoding.GetEncoding(1252));
}
The output file now shows the correct encoding.
I've experienced some peculiar performance issues.
It's a bit like MSSQL-Gate compared to "Diesel-Gate"
What I've experienced is that when memory usage rises above 80% (from other processes), MSSQL starts to consume CPU without even being used. It looks like it starts to "touch" all its memory blocks to keep them loaded in RAM. This degrades all other running processes. The behavior stops as soon as memory usage drops.
I was looking at the wrong template; it was our custom filter that did not use the richtext filter. Resolved now.
I've tried everything and in the end my problem was that my emulator was not connected to the internet, so make sure to check that! I followed the instructions here to solve the problem:
If you are on a Mac, try this: go to the Apple icon -> System Preferences -> Network, click the gear icon, and select 'Set Service Order'. Bring the active interface above the other interfaces. Then restart the Android Emulator.
$url_corrigida = str_replace("/", "%2F", $url_original);
Now a new problem in this context: when I drop folders onto the drop zone, the WebFrame is unloaded after a short period (usually after 1-3 seconds), resulting in a white window. I have global listeners for all drag events, and all of them call preventDefault.
Here are a few examples:
useEffect(() => {
const preventDefaults = (e) => {
e.preventDefault();
};
const dropzone = document.getElementById("select_folder_btn");
["dragenter", "dragstart", "dragend", "dragleave", "dragover", "drag", "drop"].forEach((eventName) => {
dropzone.addEventListener(eventName, preventDefaults);
window.addEventListener(eventName, preventDefaults);
});
});
function handleDropAsync(event: React.DragEvent) {
const droppedItemsPaths = window.electronAPI.getFilePaths(event.dataTransfer.files);
console.log("droppedItems: ", droppedItemsPaths);
Promise.all(
droppedItemsPaths.map((path) =>
window.electronAPI.isDirectory(path).then((isDir) => {
return isDir ? Promise.resolve(path) : window.electronAPI.dirname(path);
})
)
).then((directories) => {
directories = directories.filter((value, index, self) => self.indexOf(value) === index);
if (directories.length > 0) window.electronAPI.invoke("start-analysis", directories);
});
}
The following error is produced by Electron:
Error sending from webFrameMain: Error: Render frame was disposed before WebFrameMain could be accessed
Does anyone have an Electron/React solution to prevent the window from unloading when a drop event occurs?
I deleted the C:\flutter folder and reinstalled Flutter; it works for me.
Nowadays I use Playback.
It captures function args, local variables, and more, with just a #>> macro added in the right place. It also includes function replay.
The values that you trace are placed into Portal, where even large data structures can be easily browsed and reused.
The suppress-ts-errors package does just that. In your TypeScript project directory do:
npx suppress-ts-errors
I have the same problem, but nobody answers yet.
I tried to reproduce the high latency you report with my own Cloud Run Service. Here is my very simple service code:
const express = require('express');
const {PubSub} = require('@google-cloud/pubsub');
const app = express();
const pubsub = new PubSub();
const topic_name = // ... my topic name
const topic = pubsub.topic(topic_name);
app.post('/', (req, res) => {
const data = JSON.stringify(Math.random());
const message = {
data: Buffer.from(data),
};
console.log('Publish %s start', data);
topic.publishMessage(message)
.then(() => {
console.log('Publish %s done', data);
res.status(200).end();
})
.catch(e => {
console.log('Publish %s failed', data);
res.status(500).end();
});
});
const port = parseInt(process.env.PORT) || 8080;
app.listen(port, () => {
console.log(`cloud-run-service: listening on port ${port}`);
});
What I observe is that the first request to the Cloud Run Service incurs high latency, but subsequent publishes are faster. For the first request, between the "Publish Start" and "Publish Done" logs is ~400ms. When I continue to POST to my Cloud Run Service (at a very slow, 1 request per minute), the subsequent publishes all complete much faster (~50ms).
This is still very low throughput for Pub/Sub and the advice from [1] still applies:
> Pub/Sub is designed for low-latency, high-throughput delivery. If the topic has low throughput, the resources associated with the topic could take longer to initialize.
But the publish latency for subsequent publish requests is much better than the "Cold Start" latency for the Cloud Run Instance / Publisher object.
With regards to your question:
> I have read that pubsub performs poorly under low throughput, but is there a way to make it behave better?
Pub/Sub is optimized for high throughput, but even the very low QPS of my test (1 request per minute) was able to achieve 50ms latencies.
You can get lower latencies by publishing consistently, but it is a latency/cost tradeoff. If you consistently publish "heartbeat" messages to your topic to keep the Cloud Run Instance and Pub/Sub resources "warm", you will get lower single request latencies when you send a real publish request.
You can do this without having to handle those additional meaningless "heartbeat" messages at your subscriber by using filters with your subscription [2]. If you publish messages with an attribute indicating a "heartbeat" message, you can create a subscription that filters out the message before it reaches your subscriber. Your single-request publish latency from your Cloud Run Service should be consistently lower, but you would have to pay for the extra publish traffic and the filtered-out "heartbeat" messages [3].
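For illustration, a sketch of creating such a filtered subscription with the Python client (the project, topic, and subscription names are placeholders; my service above happens to be Node.js, but the same option exists across clients):
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
topic_path = subscriber.topic_path("my-project", "my-topic")
subscription_path = subscriber.subscription_path("my-project", "my-subscription")

# Messages published with a "heartbeat" attribute never reach this subscription.
subscription = subscriber.create_subscription(
    request={
        "name": subscription_path,
        "topic": topic_path,
        "filter": "NOT attributes:heartbeat",
    }
)
print(f"Created filtered subscription: {subscription.name}")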
[1] https://cloud.google.com/pubsub/docs/topic-troubleshooting#high_publish_latency
[2] https://cloud.google.com/pubsub/docs/subscription-message-filter
[3] https://cloud.google.com/pubsub/pricing#pubsub
You can also use a pivot table.
It's incredible that this doesn't work; it's the basis of the generics principle.
I have a generic class Client<S, T>, where S is the request body and T is the response body of a WebClient used in reactive mode: it gets a new bearer token (or returns the latest token if it hasn't expired) and calls the server with that token, without using .block*() methods, in non-blocking mode.
I need to use my class Client<S, T> with a method that returns a T-typed response body and takes the request body parameter as type S, but using T as below doesn't work:
public Mono<T> getResponse( @NotBlank String logHash,
@NotBlank String url,
@NotNull @NotBlank @NotEmpty S requestBody,
@NotNull @NotBlank @NotEmpty Class<T> responseBodyClass) {
return getToken()
.flatMap(token ->
webClient
.post()
.uri(url)
.header("Authorization", "Bearer " + token)
.body(BodyInserters.fromValue(requestBody))
.retrieve()
.onStatus(HttpStatus.INTERNAL_SERVER_ERROR::equals, response -> response.bodyToMono(String.class).map(Exception::new))
.onStatus(HttpStatus.BAD_REQUEST::equals, response -> response.bodyToMono(String.class).map(Exception::new))
.bodyToMono(new ParameterizedTypeReference<T>() {})
.doOnSuccess(response -> log.debug("{} full url {} response {}", logHash, this.serverUrl + url, response))
.doOnError(ex -> {
log.error("{} {}", logHash, ex.getMessage());
ex.printStackTrace();
})
)
.doOnSuccess(token -> log.debug("{} url {} response {}", logHash, this.serverUrl + url, token))
.doOnError(ex -> {
log.error("{} {}", logHash, ex.getMessage());
ex.printStackTrace();
});
}
I need to replace the following line in the previous code (it's incredible, because the type must be part of this bean, customized dynamically by Spring Framework AOP; I think a solution for Spring AOP would be to add a final Class<T> object and, at runtime, substitute T with the correct class passed to the constructor via the @Autowired annotation as a private final value, not just verify the type class on the return):
.bodyToMono(new ParameterizedTypeReference<T>() {})
with the responseBodyClass parameter, which is the .class of my response body, to avoid the exception, because the JVM can't find the T type for the return body object:
.bodyToMono(responseBodyClass)
What kind of generics usage is this implementation?
I autowire the class, and I pass the type when autowiring; how is it possible that this can't find the type?
new ParameterizedTypeReference<T>() {}
caller:
private Client<CheckCustomerBody, ApimResponse> client = null;
And in the method that uses the client, I need to pass it when calling it:
client = new Client<CheckCustomerBody, ApimResponse>();
client.configure( WebClient.builder(),
ClientType.XXXX_XX,
channelEnum,
logHash,
Optional.empty(),
Optional.empty(),
Optional.empty()
);
return client
.getResponse( logHash,
WebOrderConstants.API_EXT_CHECK,
request,
ApimResponse.class)
.map(apimResponse -> validationProductChange(
apimResponse.getResponse(),
customer.getClientAccountCode(),
logHash
)
? "OK"
: "KO"
)
.doOnError(ex -> {
log.error("{} END {} : Error using d365 service", logHash, prefix);
ex.printStackTrace();
});
I just ran into this new license concept while playing with the latest version of the FireGiant toolset.
The one price I found, and the only price, was 6500/year, which sounds crazy.
I followed the directions to generate a license, but the response was that I needed to contact my organization's administrator to get one. So that was of zero help.
Has anyone gotten a license price quote?
=TEXT(IF(AND(MINUTE(F2)>=15,MINUTE(F2)<45),HOUR(F2)&":30",
IF(MINUTE(F2)>=45,HOUR(F2)+1&":00",
HOUR(F2)&":00")),
"H:MM AM/PM")
A question: might this approach lead to a VHDL description of the network?
Simulating the dynamics of large networks is computationally expensive; if one were to implement this in hardware, maintaining the state-space description would be useful.
This regex filters out the other browsers on iOS, like Chrome, Brave, and Firefox, to identify Safari on iOS exclusively.
/iP(ad|hone|od)/.test(window?.navigator?.platform) && !/iOS\//.test(userAgent);
The issue was that the brew formula for [email protected] caused an error with dyld. To solve the problem, I built a boost from the source, installed it, and set LDFLAGS to contain the installation path.
On Ubuntu 24.04.2 I tried a few things and eventually uninstalled the snap package and followed https://github.com/neovim/neovim/blob/master/INSTALL.md#ubuntu, everything worked fine after that.
Xcode 16.2, iOS 18.3.1
This works for me:
Turn off all VPN connections
Turn off recording, SSL Proxying, and macOS Proxy in Charles (it should also work with Proxyman, I think).
I also tried the following, which didn't work for me:
Disabling and enabling Developer Mode
sudo chmod a+w /private/var/tmp/
Removing and restoring the DeveloperDiskImages folder at Library/Developer
Restarting the iPhone many times
I am using this code, and it works very well.
Now, due to an expansion and the use of a multi-currency system, the value currently displayed in euros needs to change: when an order is placed from the USA, the value from the code should be converted to USD, and when the website language is switched to Czech, it should be converted to Czech koruna at a predefined exchange rate.
How can this be achieved? Currently, the value is displayed without being converted at the exchange rate of the selected currency.
If the value is set to 100 EUR, switching to USD still shows 100 USD, and switching to Czech still shows 100 Czech koruna.
let longitud = 2
let ancho = 4
let área = longitud * ancho
let perímetro = 2 * (longitud + ancho)
console.log("el área es:", área)
console.log("el perímetro:", perímetro)
Sorry guys, but I have managed to beat the performance of all your solutions.
First, some details about my computer: CPU Intel(R) Core(TM) i5-4430 CPU @ 3.00GHz, RAM Kingston DDR3 8GiB * 2, OS Windows 11 Pro 23H2 x64, CPython 3.12.6 x64, NumPy 2.0.2, I am using IPython 8.27.0 on Windows Terminal.
I have tried to implement my idea mentioned in my latest edit to the question, and surprisingly I have actually beaten all the submitted solutions, and even np.unpackbits...
First I will describe the ways I have found to generate sequences of integers with periodic gaps.
Say you want to generate a list of numbers that includes every other group of width numbers. For example, if the width is 5, we include the first five natural numbers 0, 1, 2, 3, 4, then skip the next five numbers and append 10, 11, 12, 13, 14 to the list, then skip another five numbers and append 20, 21, 22, 23, 24...
Each group of included numbers has a width of 5, and they all start at a multiple of 5. I have identified that the included groups have an even quotient under floor division by 5.
So if I want every other group of five integers starting at 7 and ending at 100, I can do this:
In [35]: nums = np.arange(100)
In [36]: ~((nums - 7) // 5) & 1
Out[36]:
array([1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1,
0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0,
0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0,
0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1,
1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1])
But this is inefficient (and the first 2 bits should be off). NumPy arrays can be added, so I should instead do this:
In [37]: np.arange(7, 100, 10)[:, None] + np.arange(5)
Out[37]:
array([[ 7, 8, 9, 10, 11],
[ 17, 18, 19, 20, 21],
[ 27, 28, 29, 30, 31],
[ 37, 38, 39, 40, 41],
[ 47, 48, 49, 50, 51],
[ 57, 58, 59, 60, 61],
[ 67, 68, 69, 70, 71],
[ 77, 78, 79, 80, 81],
[ 87, 88, 89, 90, 91],
[ 97, 98, 99, 100, 101]])
But both are inefficient:
In [38]: %timeit column = np.arange(65536, dtype=np.uint16); ~((column - 64) >> 7) & 1
219 μs ± 15.9 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [39]: %timeit np.arange(64, 65536, 256, dtype=np.uint16)[:, None] + np.arange(128)
61.2 μs ± 2.97 μs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [40]: %timeit np.tile(np.concatenate([np.zeros(64, dtype=bool), np.ones(64, dtype=bool)]), 512)
17.9 μs ± 662 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
Though as you can see, we need to use np.concatenate if the starting position of the tiling isn't a multiple of the tile's length.
I have implemented 5 new functions. binary_codes_8 implements the above idea, though it isn't efficient:
def binary_codes_8(n: int) -> np.ndarray:
dtype = get_dtype(n)
result = np.zeros(((length := 1 << n), n), dtype=bool)
width = 1
for i in range(n - 1, -1, -1):
result[:, i][
np.arange(width, length, (step := width << 1), dtype=dtype)[:, None]
+ np.arange(width)
] = 1
width = step
return result
In [41]: %timeit binary_codes_6(16)
1.14 ms ± 37.6 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [42]: %timeit binary_codes_8(16)
2.53 ms ± 41.6 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Instead I have a new idea: rather than generating a two-dimensional array, we can join the columns head to tail into a one-dimensional array. This makes memory access contiguous and avoids a broadcast assignment on every iteration.
def binary_codes_9(n: int) -> np.ndarray:
validate(n)
places = 1 << np.arange(n)
return (
np.concatenate(
[
np.tile(
np.concatenate([np.zeros(a, dtype=bool), np.ones(a, dtype=bool)]), b
)
for a, b in zip(places[::-1], places)
]
)
.reshape((n, 1 << n))
.T
)
In [43]: %timeit binary_codes_9(16)
910 μs ± 26.9 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
As you can see, it actually beats the np.unpackbits-based solution.
gray_codes_4 implements the same idea as binary_codes_8, so it is also inefficient:
def gray_codes_4(n: int) -> np.ndarray:
dtype = get_dtype(n)
result = np.zeros(((length := 1 << n), n), dtype=bool)
width = 2
start = 1
for i in range(n - 1, 0, -1):
result[:, i][
np.arange(start, length, (step := width << 1), dtype=dtype)[:, None]
+ np.arange(width)
] = 1
width = step
start <<= 1
result[:, 0][length >> 1 :] = 1
return result
In [44]: %timeit gray_codes_4(16)
2.52 ms ± 161 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
So I tried to apply the idea of binary_codes_9 to gray_codes_5:
def gray_codes_5(n: int) -> np.ndarray:
m = n - 1
half = 1 << m
column = np.arange(1 << n, dtype=get_dtype(n))
offsets = (1 << np.arange(m - 1, -1, -1, dtype=np.uint8)).tolist()
return (
np.concatenate(
[
np.concatenate([np.zeros(half, dtype=bool), np.ones(half, dtype=bool)]),
*(~((column - a) >> b) & 1 for a, b in zip(offsets, range(m, 0, -1))),
]
)
.reshape((n, 1 << n))
.T
)
In [45]: %timeit gray_codes_5(16)
3.67 ms ± 60.2 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
It is somehow slower, but the idea is sound; it's just that the way each column is generated is inefficient.
So I tried again, and this time it even beats binary_codes_9:
def gray_codes_6(n: int) -> np.ndarray:
validate(n)
if n == 1:
return np.array([(0,), (1,)], dtype=bool)
half = 1 << (n - 1)
places = (1 << np.arange(n - 1, -1, -1)).tolist()
return (
np.concatenate(
[
np.zeros(half, dtype=bool),
np.ones(half, dtype=bool),
*(
np.tile(
np.concatenate(
[
np.zeros(b, dtype=bool),
np.ones(a, dtype=bool),
np.zeros(b, dtype=bool),
]
),
1 << i,
)
for i, (a, b) in enumerate(zip(places, places[1:]))
),
]
)
.reshape((n, 1 << n))
.T
)
In [46]: %timeit gray_codes_6(16)
759 μs ± 19.8 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
This is the benchmark of all the functions on my IPython interpreter:
In [7]: %timeit binary_codes_0(16)
1.62 ms ± 58.8 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [8]: %timeit binary_codes_1(16)
829 μs ± 9.4 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [9]: %timeit binary_codes_2(16)
1.9 ms ± 67.8 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [10]: %timeit binary_codes_3(16)
1.55 ms ± 9.63 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [11]: %timeit binary_codes_4(16)
1.66 ms ± 40.1 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [12]: %timeit binary_codes_5(16)
1.8 ms ± 54.5 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [13]: %timeit binary_codes_6(16)
1.11 ms ± 22.3 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [14]: %timeit binary_codes_7(16)
7.01 ms ± 46.5 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [15]: %timeit binary_codes_8(16)
2.5 ms ± 57.8 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [16]: %timeit binary_codes_9(16)
887 μs ± 5.43 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [17]: %timeit gray_codes_0(16)
1.65 ms ± 11.9 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [18]: %timeit gray_codes_1(16)
1.79 ms ± 9.98 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [19]: %timeit gray_codes_2(16)
3.97 ms ± 28.4 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [20]: %timeit gray_codes_3(16)
1.9 ms ± 49.6 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [21]: %timeit gray_codes_4(16)
2.38 ms ± 33.4 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [22]: %timeit gray_codes_5(16)
3.95 ms ± 19.2 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [23]: %timeit gray_codes_6(16)
718 μs ± 4.91 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [24]: %timeit JR_gray_codes_1(16)
1.42 ms ± 10.1 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [25]: %timeit gray_codes_nocomment(16)
1.03 ms ± 4.88 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [26]: %timeit JR_gray_codes_2()
809 μs ± 12.9 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
As you can see, my solutions beat all functions from the other answers.
But how do they scale with larger inputs? To find out, I reused code from my last answer:
import colorsys
import matplotlib.pyplot as plt
import numpy as np
import timeit
from math import ceil
from scipy.interpolate import make_interp_spline
def measure_command(func, *args, **kwargs):
elapsed = timeit.timeit(lambda: func(*args, **kwargs), number=5)
once = elapsed / 5
if elapsed >= 1:
return int(1e9 * once + 0.5)
times = min(1024, ceil(1 / once))
return int(
1e9 * timeit.timeit(lambda: func(*args, **kwargs), number=times) / times + 0.5
)
def test(func):
return [measure_command(func, i) for i in range(1, 21)]
bin6 = binary_codes_9(6)
gra6 = gray_codes_6(6)
execution_times = {}
to_test = [
*((f"binary_codes_{i}", bin6) for i in range(10)),
*((f"gray_codes_{i}", gra6) for i in range(7)),
("JR_gray_codes_1", gra6),
("gray_codes_nocomment", gra6),
]
for func_name, correct in to_test:
func = globals()[func_name]
assert np.array_equal(func(6), correct)
execution_times[func_name] = test(func)
cost_per_item = {
k: [e / (1 << i) for i, e in enumerate(v, start=1)]
for k, v in execution_times.items()
}
largest_execution = sorted(
[(k, v[-1]) for k, v in execution_times.items()], key=lambda x: x[1]
)
average_execution = sorted(
[(k, sum(v) / 20) for k, v in cost_per_item.items()], key=lambda x: x[1]
)
columns = [
sorted([v[i] for v in execution_times.values()], reverse=True) for i in range(20)
]
overall_performance = [
(k, [columns[i].index(e) for i, e in enumerate(v)])
for k, v in execution_times.items()
]
overall_execution = sorted(
[(a, sum(b) / 36) for a, b in overall_performance], key=lambda x: -x[1]
)
In [14]: average_execution
Out[14]:
[('binary_codes_6', 206.31875624656678),
('gray_codes_nocomment', 292.46834869384764),
('binary_codes_5', 326.2715059280396),
('binary_codes_4', 425.4694920539856),
('gray_codes_1', 432.8871788024902),
('gray_codes_4', 440.75454263687135),
('binary_codes_3', 486.1538872718811),
('JR_gray_codes_1', 505.06243762969973),
('gray_codes_0', 518.2342618465424),
('gray_codes_3', 560.9797175884247),
('binary_codes_2', 585.7214835166931),
('binary_codes_8', 593.8069385528564),
('gray_codes_5', 1012.6498884677887),
('gray_codes_6', 1102.2803171157836),
('binary_codes_0', 1102.6035027503967),
('gray_codes_2', 1152.2696633338928),
('binary_codes_1', 1207.228157234192),
('binary_codes_7', 1289.8271428585053),
('binary_codes_9', 1667.4837736606598)]
In [15]: [(a, b / 1e9) for a, b in largest_execution]
Out[15]:
[('gray_codes_6', 0.01664672),
('binary_codes_9', 0.017008553),
('gray_codes_nocomment', 0.0200915),
('binary_codes_1', 0.025319672),
('binary_codes_6', 0.027002585),
('gray_codes_3', 0.038912479),
('binary_codes_2', 0.045456482),
('JR_gray_codes_1', 0.053382224),
('binary_codes_3', 0.054410716),
('binary_codes_4', 0.0555085),
('gray_codes_0', 0.058621065),
('binary_codes_0', 0.0718396),
('binary_codes_5', 0.084661),
('binary_codes_8', 0.085592108),
('gray_codes_1', 0.088250008),
('gray_codes_4', 0.091165908),
('gray_codes_2', 0.093191058),
('binary_codes_7', 0.104509167),
('gray_codes_5', 0.146622829)]
In [16]: overall_execution
Out[16]:
[('binary_codes_6', 9.055555555555555),
('gray_codes_nocomment', 8.88888888888889),
('JR_gray_codes_1', 7.5),
('binary_codes_5', 6.972222222222222),
('binary_codes_4', 6.527777777777778),
('gray_codes_1', 6.0),
('binary_codes_3', 5.916666666666667),
('gray_codes_0', 5.805555555555555),
('gray_codes_3', 4.888888888888889),
('gray_codes_6', 4.555555555555555),
('gray_codes_4', 4.25),
('binary_codes_1', 4.083333333333333),
('binary_codes_2', 4.0),
('binary_codes_9', 3.7222222222222223),
('binary_codes_8', 3.6944444444444446),
('binary_codes_0', 3.361111111111111),
('gray_codes_2', 2.611111111111111),
('gray_codes_5', 2.2777777777777777),
('binary_codes_7', 0.8888888888888888)]
This is puzzling: clearly my new functions gray_codes_6 and binary_codes_9 perform the best for the largest input (20, meaning the output has 1048576 rows), but according to my metrics they somehow score poorly...
Just to sanity check:
In [17]: %timeit binary_codes_1(16)
908 μs ± 24.2 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [18]: %timeit binary_codes_6(16)
1.12 ms ± 8.18 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [19]: %timeit binary_codes_9(16)
925 μs ± 12.4 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [20]: %timeit binary_codes_1(20)
17.3 ms ± 205 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [21]: %timeit binary_codes_6(20)
28.6 ms ± 233 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [22]: %timeit binary_codes_9(20)
23 ms ± 753 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [23]: %timeit gray_codes_6(16)
854 μs ± 23.5 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [24]: %timeit JR_gray_codes_1(16)
1.69 ms ± 34 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [25]: %timeit gray_codes_nocomment(16)
1.11 ms ± 31 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [26]: %timeit gray_codes_6(20)
15.9 ms ± 959 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [27]: %timeit gray_codes_nocomment(20)
20.5 ms ± 640 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [28]: %timeit JR_gray_codes_1(20)
48.5 ms ± 4.15 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Indeed, they performed better for larger inputs.
So I tried to graph the performance. The graphs are kind of busy, with a lot going on, but I saw something:
ymax = -1e309
for v in execution_times.values():
ymax = max(max(v), ymax)
rymax = ceil(ymax)
fig, ax = plt.subplots()
ax.axis([0, 20, 0, ymax])
ax.set_xticks(range(1, 21))
ax.set_xticklabels([f"{1<<i}" for i in range(1, 21)])
for k, v in execution_times.items():
x = np.linspace(1, 20, 256)
plt.plot(x, make_interp_spline(range(1, 21), v)(x), label=k)
plt.legend()
plt.show()
fig, ax = plt.subplots()
ax.axis([0, 20, 0, 19])
ax.set_xticks(range(1, 21))
ax.set_yticks(range(20))
ax.set_xticklabels([f"{1<<i}" for i in range(1, 21)])
for k, v in overall_performance:
plt.plot(range(1, 21), v, label=k)
plt.legend()
plt.show()
My smart functions have flatter curves than the inefficient ones: they are slower for small inputs, but the inefficient ones grow much faster with input size.
In my cost_per_item, in order to get a sensible average of the execution times across different inputs, I divided the execution time for each input by the corresponding output size, to get an average cost per item...
But this is wrong: the functions don't scale linearly, and the bigger the input, the harder it is to finish the execution quickly.
And in overall_execution the scoring is also wrong: we care about how a function scales to ever larger inputs, so if a function completes faster only on smaller inputs, that should carry less weight:
In [35]: overall_execution1 = overall_execution = sorted(
...: [(a, sum(c << i for i, c in enumerate(b))) for a, b in overall_performance], key=lambda x: -x[1]
...: )
In [36]: overall_execution1
Out[36]:
[('gray_codes_6', 18462984),
('binary_codes_9', 17880641),
('gray_codes_nocomment', 16642805),
('binary_codes_1', 15627986),
('binary_codes_6', 14995433),
('gray_codes_3', 13092872),
('binary_codes_3', 11226678),
('binary_codes_2', 11189287),
('JR_gray_codes_1', 10861503),
('binary_codes_4', 9194493),
('binary_codes_0', 9184718),
('gray_codes_0', 8130547),
('binary_codes_5', 6373604),
('binary_codes_8', 5019700),
('gray_codes_1', 4305913),
('gray_codes_4', 3413060),
('gray_codes_2', 2567962),
('binary_codes_7', 925733),
('gray_codes_5', 210406)]
In [37]: average_execution1 = sorted(
...: [(k, sum(v) / 20) for k, v in execution_times.items()], key=lambda x: x[1]
...: )
In [38]: [(a, round(b / 1e9, 6)) for a, b in average_execution1]
Out[38]:
[('binary_codes_9', 0.001746),
('gray_codes_6', 0.001789),
('gray_codes_nocomment', 0.001967),
('binary_codes_1', 0.002462),
('binary_codes_6', 0.002567),
('gray_codes_3', 0.003661),
('binary_codes_2', 0.004313),
('binary_codes_3', 0.004558),
('JR_gray_codes_1', 0.004716),
('binary_codes_4', 0.005205),
('gray_codes_0', 0.005425),
('binary_codes_0', 0.005544),
('binary_codes_5', 0.007468),
('binary_codes_8', 0.007723),
('gray_codes_1', 0.007831),
('gray_codes_4', 0.008079),
('gray_codes_2', 0.0086),
('binary_codes_7', 0.009952),
('gray_codes_5', 0.013721)]
According to this new metric, it is undeniable that my binary_codes_9 and gray_codes_6 are much faster than the solutions provided by the other answers.
And I selected the fastest 9 functions to graph as proof:
In the last image, the vertical axis signifies how many functions have been beaten by the function corresponding to the graph.
My functions aren't beaten, but since the code from nocomment beats my original solutions, I will accept that answer.
For multi-module Maven projects, specify the module using -pl
and execute the Maven command from the root project. For example:
mvn test -pl=service -Dtest=InventoryServiceTest
In my case, I was omitting -pl and was seeing "No tests were executed!".
I am doing Google authentication with Firebase using a fake token, and with the Firebase Local Emulator Suite for testing.
I removed this line in the test class:
@get:Rule(order = 1)
val composeRule = createAndroidComposeRule<MainActivity>()
Then my code worked fine, after wasting a week.
This seems to be an MSVC/STL bug. Microsoft acknowledges as much in https://developercommunity.visualstudio.com/t/After-calling-std::promise::set_value_at/10205605. At the same time, fixing the bug would break ABI compatibility, which is why they are putting off a fix for now.
Are your classes in namespaces they shouldn't be in?
I had this error, and when I made the namespaces consistent (I had been copying things from other services), it stopped throwing that error.
Try adding this in settings.json
"editor.fontVariations": true
I have a similar problem and am modifying this solution. I am learning Python for data science, after taking a class in it and forgetting it. I am relearning it and teaching myself data processing: taking a CSV and getting it ready for AI algorithms in sklearn.
I am going to replace 'df' with 'data', and the quote split with a dash split.
Try adding an external CSS file and linking to it in the head of the HTML.
X2 is whatever variable you want to average over; in my case it was model_average, to get the monthly average of my model-averaged streamflow data.
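For illustration only (this assumes a pandas DataFrame with a datetime index and a model_average column; the original poster's setup may differ):
import pandas as pd

# Hypothetical daily streamflow values with a datetime index.
flow = pd.DataFrame(
    {"model_average": [1.0, 2.0, 4.0, 8.0]},
    index=pd.to_datetime(["2024-01-01", "2024-01-15", "2024-02-01", "2024-02-20"]),
)

# Monthly average of the model-averaged streamflow.
monthly = flow.groupby(flow.index.month)["model_average"].mean()
print(monthly)  # month 1 -> 1.5, month 2 -> 6.0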
In every example I see, the rule is added from Python code. But in my case, it's not easy to construct or maintain such rules.
I have several worksheets each with a fixed layout in terms of columns or rows. I select cells within the columns (to avoid summarized values being considered in hierarchical data) and have created conditional formatting within excel.
Ex: RULE: AND(INT($F5) > 95000, $M5 < 0.9) applies to $F$5:$F$10,$F$12:$F$14,$F$16:$F$18,$F$20:$F$25,$F$27:$F$33
There are at least 10 rules per worksheet ... and these can change bi-weekly. So I do not want to keep changing the Python code.
My intention is to clear the formatting in the worksheet after filling each cell with the color applied via conditional formatting, to help me in the review stage (some fills may be added or removed manually, as the rules are more of a benchmark than an enforcing criterion).
But I see two problems:
cell.fill does not return the fill applied via conditional formatting. Rather it seems to contain the fill value applied statically to the cell and saved in the worksheet.
I'm using Python 3.13.2 (the latest version as of this date) with openpyxl 3.1.5.
However,
worksheet.conditional_formatting.clear()
worksheet.conditional_formatting.get_rules(coord)
worksheet.conditional_formatting.remove(coord, rule)
all of the above statements fail with
AttributeError: 'ConditionalFormattingList' object has no attribute 'clear' / 'get_rules' / 'remove'
(a possible workaround is sketched after the code below)
for i in range(1, 33) :
for j in range(1, 8):
cell = worksheet.cell(row = i, column = j)
k = 10+j
trg_cell = worksheet.cell(row = i, column = k)
if cell.fill:
trg_cell.fill = copy(worksheet.cell(row = i, column = j).fill)
'''
worksheet.conditional_formatting.clear()
for i in range(1, 33) :
for j in range(1, 8):
cell = worksheet.cell(row = i, column = j)
coord = cell.coordinate
for rule in worksheet.conditional_formatting.get_rules(coord):
worksheet.conditional_formatting.remove(coord, rule)
'''
'''
# Iterate through the cells to extract the fill colors
for row in worksheet.iter_rows():
for cell in row:
# if its white or greay used for row background
if cell.fill and cell.fill.start_color.index != '00000000' and cell.fill.start_color.index != 'FFF8F8F8':
# Save the fill color
fill = cell.fill
#mycolor = openpyxl.styles.colors.Color('FF00FF00')
#print(str(fill.))
#print(cell.coordinate + ' BG: ' + str(fill.bgColor) + ' FG: ' + str(fill.fgColor) + ' START: ' + str(fill.start_color) + ' END: ' + str(fill.end_color))
#print(cell.coordinate + ' '+ str(cell.fill.start_color.index) + ' ' + str(cell.fill.fgColor.index) + ' ' + str(cell.fill.bgColor.index))
#cell.fill = PatternFill(bgColor=mycolor, fill_type="solid")
# Remove conditional formatting
#cell.fill = PatternFill(start_color=fill.start_color, end_color=fill.start_color, fill_type="solid") #fill_type=fill.fill_type
cell_colors[cell.coordinate] = cell.fill
'''
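For what it's worth, here is a sketch of what I would try next (this is an assumption about openpyxl 3.1 internals, not something I have verified): the ConditionalFormattingList itself appears to be iterable, so the rules can at least be read, and clearing might be achievable by replacing the list wholesale.
from openpyxl.formatting.formatting import ConditionalFormattingList

# Reading: each entry carries the ranges it applies to (sqref)
# and the rules attached to them.
for cf in worksheet.conditional_formatting:
    print(cf.sqref)
    for rule in cf.rules:
        print(rule.type, rule.formula)

# Clearing: there is no .clear(), but replacing the whole list
# should drop every conditional-formatting rule on the sheet.
worksheet.conditional_formatting = ConditionalFormattingList()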
For me, it was the project. I simply added a new test project to my solution ("Unit Test Project (.NET Framework)") and moved all my test (.cs) files to that project. I had taken the advice of executing my test project within a command window (as suggested above), and the result mentioned that MSTest was legacy; that is what prompted me to add a new project and move my code files to it. All is working now.
If you are on a mac and you are connected to the left port, just switch to the right port.
I'm commenting on this thread to save time for users of this product.
My history: converting a bunch of Word-based documents to jrxml (about 200 different documents).
As I found, at this time there is no straightforward converter to simplify this routine.
My experience:
Save the Word document as HTML (with the filter that avoids MS-specific markup tags).
Open the file prepared in step 1 in Notepad++ (which I used), then Find and Replace in extended mode \r\n with ' ' (a whitespace).
In the same file, Find and Replace in regexp mode <p[^>]*> with <p>, and also Find and Replace in regexp mode <span[^>]*> with <span> (better to record a macro for these actions).
After these steps you get a cleaned source ready for a TextField with HTML markup.
Last action: place the markup obtained above in text field(s) with the html-markup property enabled.
P.S.: tables will be lost in the conversion because JasperStudio does not convert those tags; I used TextFields with borders enabled.
There are several ways to solve this problem and it depends a lot on the context.
It may be because of the width?
"plotOptions": {
"series": {
"pointWidth": 50,
In my case, the number did not appear at the top because the graph started too low; I had to adjust the minimum.
"yAxis": [
{
"min":5000
Have you tried asserting its response?
I am using the Firebase Local Emulator Suite in my testing, and I removed this line in the test class:
@get:Rule(order = 1)
val composeRule = createAndroidComposeRule<MainActivity>()
Then my code worked fine, after wasting a week.
I had a similar issue with MOD13Q1 data. The following steps will allow you to get data in one area. You can then use the same steps with multiple files to create a time series. The full script that I used to plot satellite data timeseries and produce a map is linked here.
Extract the horizontal and vertical tile coordinates (h, v) from the file name (see the sketch after this list).
Loop through each line of the data structure to store the variables in a table.
Reproject the sinusoidal coordinates to latitude & longitude with h and v.
Load a kml file of your desired area. This can be made in Google Earth.
Crop the data using "inpolygon".
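For step 1, the tile indices are embedded in the standard MODIS file name as h##v##, so they can be pulled out with a regular expression. A sketch in Python (my original workflow was MATLAB; the file name here is a hypothetical example):
import re

# Standard MODIS granule names encode the tile as h##v##.
fname = "MOD13Q1.A2020001.h09v05.061.2020018000000.hdf"

match = re.search(r"h(\d{2})v(\d{2})", fname)
h, v = (int(g) for g in match.groups())
print(h, v)  # -> 9 5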
Is there any way to get this command working from an Azure DevOps YAML pipeline? I've tried a few times, but it just seems to ignore the setting.
Use this extension:
After installing, select the version of bootstrap (example v5.3) from the bottom left of VSCode:
It will open the drop-down; then select the version and enable auto-completion.
STARTER CODE
The given web page just contains a block of text for the table of contents page for Call of the Wild by Jack London.
In this exercise, you’ll make this page much easier to read using HTML formatting tags.
YOUR JOB
Using the tags <h1>, <h2>, <h3>, <h4>, <hr>, and <em>, transform this page to look like the page below. Remember that in header tags, the font size gets smaller as the number gets larger (<h1> is the biggest; <h6> is the smallest).
The end result should look like this:
The accepted answer works until some CSS demi-god decides otherwise, as shown in the picture below. Fortunately, there is !important.
[Image: a second, equally specific CSS selector is ignored in favor of the first one.]
Append --copy-files flag to your build command like this.
npx babel src/lib --out-dir dist --copy-files
Use compute yyyymmdd = mmddyyyy * 10000.0001. Multiplying by 10000.0001 appends a copy of the leading digits after the trailing ones (e.g. 12312024 becomes 123120241231.2024), so the last eight digits of the integer part are yyyymmdd.
@Sweeper's answer got me thinking about whether this can be done in a more platform-agnostic way. And there is a way: one can use withTransaction with a transaction that has disablesAnimations set to true.
@MainActor
func withoutAnimations(
perform job: @MainActor @escaping () -> Void,
withDelay delay: RunWithoutAnimationDelay = .defaultAnimationDuration
) {
Task { @MainActor in
try await Task.sleep(for: delay.duration)
try withTransaction(.noAnimationTransaction(), job)
}
}
A delay is necessary for the UI machinery to finish the current navigation. Otherwise there will be an error:
Update NavigationRequestObserver tried to update multiple times per frame.
Using the proposed 1 ms delay causes animation glitches, so I've opted for a different delay value.
Full gist here
Usage:
// this changes the top of the navigation path to "Something Else"
path.append("Something Else")
withoutAnimations {
path.removeLast(2)
path.append("Something Else")
}
For future people finding this result: there is an example of using BERT for address matching on GitHub called GeoRoBERTa.
Try checking Logs Explorer for more information as there are logs that are not shown in the Cloud Run logs but are present in the Logs Explorer. Also ensure that Cloud Build API is enabled. You can also refer to this documentation about deploying to Cloud Run from a Git repository.
I had a similar problem using VS for Mac to build a Xamarin app.
I made this configuration change to solve it:
So you can just use variables but that might not be very efficient:
local SUCCESS = 1
local FAILURE = 0
local my_choice = 1
if my_choice == SUCCESS then
print("Success")
else
print("Failure")
end
move 375, 400
let angle = 30
color GREEN
rotate 90
fun draw(size, level) {
if level > 0 {
forward size
rotate angle
draw(0.8 * size, level - 1)
rotate -2 * angle
draw(0.8 * size, level - 1)
rotate angle
forward -size
}
}
draw(80, 7)
Sometimes I have experienced a stuck warning message like yours, even though the function itself was working fine. You can clean up the stuck warning message yourself as follows:
Navigate to the storage account associated with your Function App, then in Tables you can clear the error. Error format: "AzureFunctionsDiagnosticEventsDATEHERE".
If you clear the table and the error persists, then you did not fix the underlying problem. You can confirm this by checking the table for new records.
For those wondering how to prevent the editor from opening the source file after a $finish/$stop (I saw a comment above, but my reputation is too low to reply):
set PrefSource(OpenOnFinish) 0
set PrefSource(OpenOnBreak) 0
Source:
ModelSim® GUI Reference Manual, v2024.2
After posting in the SQLite forum, I've got my answers:
"Why is this?" - apparently no real reason, it's "unspecified and untested behavior". It should be fixed in the next release of SQLite, but is not considered a bug and may be reverted later on.
As for a workaround - short of using an updated version of SQLite (as yet unreleased, unless building from source), instead of specifying the command as an argument, use echo and pipe it to sqlite3:
$ echo '.tables' | sqlite3 -echo database.db
.tables
stuff
To specify multiple commands, use printf '%s\n' instead of echo:
$ printf '%s\n' '.tables' 'SELECT * FROM stuff;' | sqlite3 -echo database.db
.tables
stuff
SELECT * FROM stuff;
apples|67
tissues|10
I have the same question. Basically, I need to add a new entry to the DATABASES dictionary and be able to start using the new connection without restarting the service.
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': BASE_DIR / 'db.sqlite3',
},
'new_db': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'new_db_name',
'USER': 'new_db_user',
'PASSWORD': 'new_db_password',
'HOST': 'new_db_host',
'PORT': 'new_db_port',
}
}
The solution that worked for me: Check if you are connected to a VPN. If so, turn it off and try again.
Did anyone manage to figure this out? I want to capture the SIP INVITE message from my Android device.
For anyone looking for this more recently: Go 1.22+ includes a native concat function now:
slices.Concat(a, b, c)
Pyright has reportUnnecessaryTypeIgnoreComment, which flags unnecessary # type: ignore comments as errors.
Add to pyrightconfig.json:
{
"reportUnnecessaryTypeIgnoreComment": "error"
}
Source: Pyright Documentation describing feature in Type Check Diagnostics Settings
I have a column named objekttypn which I want to use as categories:
import geopandas as gpd
df = gpd.read_file(r"C:/Users/bera/Desktop/gistest/roads_260.shp")
ax = df.plot(column="objekttypn", cmap="tab20", legend=True,
categorical=True, figsize=(10,10))
The original Pyrogram is abandoned and does not support custom emoji reactions.
You can install a fork which has support for custom emoji reactions.
MuparserX is not thread-safe because of its use of static variables, in several locations.
Interestingly, the answer had nothing to do with what I thought it did. rocker/geospatial had nothing to do with this at all - instead, I found an error in my fstab file that was hanging and preventing any service from initializing.
tl;dr: Services may fail to start if other services which the system relies on have errors. This may be entirely obscured - so check your services and startup scripts, including /etc/fstab errors, to see if anything is blocking the initialization of other services.
I am looking for a solution to do TTS from HTML content, highlighting the current word while keeping the HTML structure intact for the layout. The TTS should skip HTML elements.
We struggled with this for a long time - the solution was to upgrade django-whitenoise: https://github.com/evansd/whitenoise/pull/612
So far it seems to be working.
Unfortunately, the euidaccess function has a bug: it returns a different result than open. I modified my program and changed the list of supplemental groups to which the user belongs. euidaccess still works incorrectly, but a regular open correctly opens (or reports an error for) the same file. So I gave up on euidaccess and (unfortunately) have to open the file to check what access rights it grants. Of course, the files being checked have additional permissions set by ACLs, and this may be the problem.
In my case, I had to fill in some data in App Store Connect, such as bank info, and accept the terms. This helped me find the solution.
.should("be.visible") will work if your element or parent element doesn't have "hidden" attribute,
cy.get("#__next").should("be.visible") seems working fine as mentioned by @One_Mile_Up